Violet Peng
@VioletNPeng
Associate Professor@UCLA-CS. Research: NLP, AI creativity, controllable generation, model evaluation, computational journalism, events. (she/her/hers)
Thrilled, grateful, and humbled to have won 3 outstanding paper awards at #EMNLP2024!!! Not even in my wildest dreams. Immense thanks to my amazing students and collaborators! All three works are on evaluating LLMs’ abilities in creative narrative generation. 🧵👇
Big congrats 🎉🎊🍾 to UCLA PLUS lab @VioletNPeng and collaborators for winning 3 outstanding paper awards at #EMNLP2024 👏👏👏
🚨Thrilled to share our new work: AI debate combats misinformation better than single AI advisors! 🤔We tested if two AIs debating opposite sides helps biased humans judge controversial COVID-19 claims more accurately. Paper: arxiv.org/abs/2506.02175 🧵👇 #AI #Debate
🤩One last call for our poster! Check out our 💥 𝐕𝐈𝐒𝐂𝐎 💥 benchmark for a deeper understanding of 𝐕𝐋𝐌 𝐬𝐞𝐥𝐟-𝐜𝐫𝐢𝐭𝐢𝐪𝐮𝐞 𝐚𝐧𝐝 𝐫𝐞𝐟𝐥𝐞𝐜𝐭𝐢𝐨𝐧. ⏱️Come visit us at 𝐄𝐱𝐇𝐚𝐥𝐥 𝐃 #𝟑𝟗𝟔 𝐚𝐭 𝟒-𝟔𝐩𝐦!
Can VLMs improve 𝘁𝗵𝗲𝗺𝘀𝗲𝗹𝘃𝗲𝘀💪? We propose🔥𝗩𝗜𝗦𝗖𝗢, a benchmark to evaluate VLMs’ 𝗰𝗿𝗶𝘁𝗶𝗾𝘂𝗲 and 𝗰𝗼𝗿𝗿𝗲𝗰𝘁𝗶𝗼𝗻 capabilities, toward the higher goal of autonomous VLM self-improvement. 🌐Project: visco-benchmark.github.io 📄Paper: arxiv.org/abs/2412.02172
🌏How culturally safe are large vision-language models? 👉LVLMs often miss the mark. We introduce CROSS, a benchmark of 1,284 image-query pairs across 16 countries & 14 languages, revealing how LVLMs violate cultural norms in context. ⚖️ Evaluation via CROSS-EVAL 🧨 Safety…
For this week’s NLP Seminar, we are thrilled to host Emma Pierson @2plus2make5 to give a talk titled Using New Data to Answer Old Questions! When: 5/16 Fri 2pm PT Registration: forms.gle/9sNYv2isfcqYQC…
I’ve seen many questions about how to choose ARR tracks for submissions aimed at the new tracks at #EMNLP2025. We actually wrote a blogpost along with the 2nd CFP exactly to address this: 2025.emnlp.org/track-changes/ Please help us share it widely! Good luck with your EMNLP submissions!
Happy to see #EMNLP2025 introducing new tracks on AI/LLM Agents, Code Models, Safety & Alignment, Reasoning, LLM Efficiency, and more. Big thanks to the organizers for making this happen! @emnlpmeeting #NLProc Perfect venue for agentic research and language technologies.…
🚨 2nd CFP for #EMNLP25 is out! We (PCs) have introduced: ✅ New submission topics 🎯 A theme track on Interdisciplinary Recontextualization of NLP 📜 Policies to penalize low-quality reviews & reward high-quality ones, with @ReviewAcl 📝 CFP: 2025.emnlp.org/calls/main_con… 🧵 Blog on…
For this week’s NLP Seminar, we are thrilled to host Aditya Kusupati @adityakusupati to give a talk titled Matryoshka Principles for Adaptive Intelligence! When: 5/9 Fri 2pm PT Registration: forms.gle/j3LNvNbdfnu1xL…
Heyy NAACL 2025! What do 📰 journalism, 🎵 music lyric composition, ⚖️ legal writing, 💭 psychological counseling and 🍽️ menu design all have in common? Please come by and see our tutorial, Creative Planning, with @VioletNPeng @TenghaoHuang45 @PhilippeLaban
And now in ICML 2025!
Everyone knows the importance of data, and thus of synthetic data. But how do we generate synthetic data that helps models recognize novel concepts? Contrastive features are what you need! 👇
#GPT4o image generation brings synthetic visual data quality to the next level. 🖼️ 🤔Is synthetic visual data finally ready to be used for improving VLMs? 🚀 We show success with CoDA, using contrastive visual data augmentation to help teach VLMs novel and confusing concepts.
lol
Adam deserves the award, but in Singapore everyone still uses SGD
Finally wrapped up my first time serving as a program co-chair. Learned so much from my fellow co-chairs, and it felt bittersweet to say goodbye to our regular Monday meetings. Ensuring peer review quality and getting great ideas popularized is an ongoing mission. Next: #EMNLP25. Fight on!
Huge thanks to the #ICLR2025 Organizing Committee (including many who couldn't make it to the conference) 👏👏👏
Excited to speak more about AI creativity at SSNLP today in Singapore ssnlp-website.github.io/ssnlp25/ Also looking forward to hearing what the Qwen team has to say about their latest breakthrough! Friends in Singapore: let’s catch up!
Excited to be at #ICLR2025 🇸🇬 from 4/24 to 4/28 to share this work on Multimodal RAG. Presenting it on Saturday 4/26, 3pm - 5:30pm at Hall 3 + Hall 2B #108. I'm also happy to chat about multimodal models, 3D vision-language, and embodied AI in general with old…
🚀Introducing MRAG-Bench: How do Large Vision-Language Models utilize vision-centric multimodal knowledge? 🤔Previous multimodal knowledge QA benchmarks can mainly be solved by retrieving text knowledge.💥We focus on scenarios where retrieving knowledge from an image corpus is more…
For this week’s NLP Seminar, we are thrilled to host David Bamman @dbamman to give a talk titled Measuring Representation and Linguistic Variation in Hollywood! 🗓️ 4/18 Fri 2pm PT Registration: forms.gle/hge16zkv3YnzvR…
Announcing the keynote speakers for #ICLR2025! Speakers will cover topics ranging from foundational advances in language models to AI safety, open-ended learning, and the nature of intelligence itself. blog.iclr.cc/2025/04/11/ann…
Wondering what review scores you need to get accepted at ACL? Maybe this data from NAACL 2025 can help: gist.github.com/aritter/8b65a9…
We’re starting a new NLP seminar series! If you’re visiting the area and want to stop by, please let us know!
🚨 New NLP seminar series alert! 🚨 Check out UCLA NLP Seminar series featuring cutting-edge talks from top researchers in NLP and related areas. Great lineup, timely topics, and open to all (zoom)! 🧠💬 📅 Schedule + details: uclanlp.github.io/nlp-seminar/
new paper! 🌱 Collapse of Dense Retrievers We uncover major vulnerabilities in dense retrievers like Contriever, showing they favor: 📌 Shorter docs 📌 Early positions 📌 Repeated entities 📌 Literal matches ...all while ignoring the answer's presence! huggingface.co/datasets/mohse…