hyunji amy lee
@hyunji_amy_lee
Incoming postdoc @unc_ai_group w/ @mohitban47. PhD student @kaist_ai. Previously: @allen_ai @Adobe.
🥳Excited to share that I’ll be joining @unccs as a postdoc this fall. Looking forward to working with @mohitban47 & the amazing students at @unc_ai_group. I'll continue working on retrieval, aligning knowledge modules with LLMs' parametric knowledge, and expanding to various modalities.

PS. FYI, Hyunji (Amy)'s expertise/interests are in retrieval, aligning knowledge modules with LLM's parametric knowledge, and expanding to various modalities, with diverse+extensive work at KAIST, AI2, Adobe, MSR, etc., details at --> amy-hyunji.github.io
🎉 Yay, welcome @hyunji_amy_lee -- super excited to have you join us as a postdoc! 🤗 Welcome to our MURGe-Lab + @unc_ai_group + @unccs family & the beautiful Research Triangle area -- looking forward to the many fun+impactful collaborations together 🔥
RAG and in-context learning are the go-to approaches for integrating new knowledge into LLMs, but they make inference very inefficient. We propose 𝗞𝗻𝗼𝘄𝗹𝗲𝗱𝗴𝗲 𝗠𝗼𝗱𝘂𝗹𝗲𝘀 instead: lightweight LoRA modules trained offline that can match RAG performance without the drawbacks.
1/ Are two #LLMs better than one for equitable cultural alignment? 🌍 We introduce a Multi-Agent Debate framework — where two LLM agents debate the cultural adaptability of a given scenario. #ACL2025 🧵👇
CFP of the Wordplay 2025 (EMNLP) is live! wordplay-workshop.github.io
Announcing the 5th Wordplay Workshop at EMNLP 2025 (Suzhou, China). We are co-organizing the CPDC Challenge (total prize value USD 20K!!!), the warm-up round is starting now! wordplay-workshop.github.io
🚨 New Paper 🧵 How effectively do reasoning models reevaluate their thoughts? We find that:
- Models excel at identifying unhelpful thoughts but struggle to recover from them
- Smaller models can be more robust
- Self-reevaluation ability is far from true meta-cognitive awareness
Wonder why DPO works so well? Check out our paper led by @Yunjae_Won_ for deep insights into its effectiveness and behavior from an information-theoretic perspective!
[1/6] Ever wondered why Direct Preference Optimization is so effective for aligning LLMs? 🤔 Our new paper dives deep into the theory behind DPO's success, through the lens of information gain. Paper: "Differential Information: An Information-Theoretic Perspective on Preference…
🚨 New Paper co-led with @bkjeon1211 🚨 Q. Can we adapt Language Models, trained to predict the next token, to reason at the sentence level? I think LMs operating at a higher level of abstraction would be a promising path toward advancing their reasoning, and I am excited to share our…
New preprint 📄 (with @jinho___park ) Can neural nets really reason compositionally, or just match patterns? We present the Coverage Principle: a data-centric framework that predicts when pattern-matching models will generalize (validated on Transformers). 🧵👇