Uri Berger
@uriberger88
Computer Science PhD candidate (NLP joint program at the Hebrew University of Jerusalem and The University of Melbourne)
Ever wondered if LLMs can blend into human conversations where there's no turn-taking and everyone speaks whenever they choose? Check out our new paper on asynchronous communication, using the game of Mafia as a test case.
Asynchronous settings are everywhere: from Slack threads to classroom discussions. This work is a step toward the next frontier: socially aware AI that knows not just what to say, but when to say it!🚀 📄arxiv.org/abs/2506.05309 w/ @uriberger88 @GabiStanovsky 🧵7/7
Had an awesome time presenting both my talk and poster @naaclmeeting! Will miss having beer at the Sister pub 🍻 🎤 arxiv.org/abs/2409.16646 📌 arxiv.org/abs/2406.13274
Tomorrow I'll present "The State and Fate of Summarization Datasets" at #NAACL2025! I'll cover gaps in terminology, discoverability and multilingual coverage across 130+ datasets in 104 languages, and share how our work can help navigate this space. 🗓️Fri May 2, 12:00 PM Ruidoso
If you're at @naaclmeeting and interested in cross-cultural research (like everyone else here...), come see my talk today. Ruidoso room, 17:00, see you there :)
Have you ever wondered if speakers of different languages focus on different entities when viewing the same image? Check out our recent work to find out! arxiv.org/abs/2409.16646 w/ @PontiEdoardo
We're at #NAACL2025! Presenting: 📍Cross-Lingual and Cross-Cultural Variation in Image Descriptions Thu May 1, 5:00 PM Ruidoso 📍The State and Fate of Summarization Datasets: A Survey Fri May 2, 12:00 PM Ruidoso @uriberger88, @Shachar_Don, @Dahan_Noam
To appear at #NAACL2025 (2 orals, 1 poster)! @colemanhaley22: which classes of words are most grounded in (perceptual proxies of) meaning? @uriberger88: how do image descriptions vary across languages and cultures? @huhanxu1: can LLMs follow sequential instructions? 🧵below
"Summarize this text" out ❌ "Provide a 50-word summary, explaining it to a 5-year-old" in ✅ The way we use LLMs has changed—user instructions are now longer, more nuanced, and packed with constraints. Interested in how LLMs keep up? 🤔 Check out WildIFEval, our new benchmark!
Happy to share that our paper, Cross-Lingual and Cross-Cultural Variation in Image Descriptions, has been accepted to NAACL 2025! 🎊
It's raining conference decisions, congrats to the first authors @ZeroyuHuang @huhanxu1 @uriberger88 @colemanhaley22 and the rest of the team!
Do speakers of different languages talk differently about what they see? We measure the saliency of entities mentioned in image captions of 31 languages to answer: sometimes they do! Kudos to @uriberger88 for leading the project
Language Models learn data with complex structures. Can they learn simple ones? According to our new #EMNLP2024 paper “Exploring the Learning Capabilities of Language Models using LEVERWORLDS”, the answer is “Yes, but not so fast…” w/ @amir_feder and @AbendOmri (1/5)