Justin Cho 조현동
@HJCH0
Contextualizing Human-AI Interactions. NLP PhD candidate @USC_ISI
Can you tell what actions are being mimed in this video? If so, you’re smarter than AI models! Check the last tweet in this thread for answers. In a new paper, we present MIME, which evaluates whether vision language models (VLMs) have a robust understanding of human actions. 🧵
Our next NL seminar is this Thursday! Justin Cho (@HJCH0) is a PhD Candidate at USC. In this talk, he'll present research that explores enabling more useful and contextualized human-AI interactions. Join: bit.ly/4h1RZVM
Save coffee for only when you really need it, and it will do wonders.
✨✨✨Hello everyone, I’m on the faculty job market this year.✨✨✨ I’m completing my PhD at USC, where I study agentic planning in creative contexts. But before I get deeper into my research, I really want to tell you a little bit about myself :)
🚨🚨🚨 New paper drop! 🚨🚨🚨 If you’re a researcher, you’d probably like at least **some** of your work to get covered by the news media. Right? arxiv.org/pdf/2411.13779, with @m1chae1_1u, Sriya Kaylan, @HJCH0 @shi_weiyan and @jonathanmay Only a small % of researchers get…
I'm presenting this work during today's poster session from 10:30AM-12PM at EMNLP! Come by and say hi 👋 x.com/HJCH0/status/1…
✨EMNLP Paper ✨ Wouldn't it be great if we could also listen to LLM responses when we can't look at a screen? Problem: LLMs generate responses without considering the unique constraints of speech 😢 🎉 Let's fix that with Speechworthy Instruction-tuned Language Models