Minwoo (Josh) Kang
@joshminwookang
CS PhD Student @UCBerkeley @berkeley_ai | @WilliamsCollege '20
Can LLMs assist public opinion survey design by predicting responses? We fine-tune LLMs on our new large-scale survey response dataset, SubPOP, which narrows the gap between human and LLM response distributions by up to 46% 📊 A 🧵 on our findings: 👇
(1/n) 🧵 Can Large Language Models simulate different individuals' beliefs and opinions? Check out our paper on conditioning LLMs with virtual personas to approximate individual human samples at #EMNLP2024! Paper: arxiv.org/abs/2407.06576… Code: github.com/CannyLab/antho…
🔍 Just dropped: “Puzzled by Puzzles: When Vision-Language Models Can’t Take a Hint” 👉 arxiv.org/abs/2505.23759 Puns + pictures + positioning = a nightmare for today’s AI. These models just don’t get it (yet). 😵‍💫 Check out the 🧵 to see our findings (1/4) #AI #Multimodal #VLM
Happy to announce the first workshop on Pragmatic Reasoning in Language Models — PragLM @ COLM 2025! 🧠🎉 How do LLMs engage in pragmatic reasoning, and what core pragmatic capacities remain beyond their reach? 🌐 sites.google.com/berkeley.edu/p… 📅 Submit by June 23rd
LLMs have behaviors, beliefs, and reasoning hidden in their activations. What if we could decode them into natural language? We introduce LatentQA: a new way to interact with the inner workings of AI systems. 🧵