Jesse Thomason
@_jessethomason_
Assistant Prof @CSatUSC leading the GLAMOR lab http://glamor.rocks (he/him; 💖💜💙)
The CoRL party continues tomorrow! My student @Ishika_S_ is helping to organize LangRob (sites.google.com/view/langrob-c…) where I'll give the last invited talk at 1600, and she is also presenting our ongoing work on symbolic planning and LLMs (arxiv.org/abs/2406.02791) at LEAP. Come see!
And here are some great pics with my other awesome advisors too :) @JosephLim_AI @_jessethomason_
In other news, ReWiND won best paper at the OOD workshop at RSS yesterday! If you haven’t already, check it out: 🕸️📑: rewind-reward.github.io. Or if you’re lazy like me, read @Jesse_Y_Zhang’s tweet 😉
Reward models that help real robots learn new tasks—no new demos needed! ReWiND uses language-guided rewards to train bimanual arms on OOD tasks in 1 hour! Offline-to-online, lang-conditioned, visual RL on action-chunked transformers. 🧵
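For intuition, here is a minimal sketch of the general idea behind language-guided rewards: score each observation by how close it sits to the task instruction in a shared embedding space. The encoders below are hypothetical placeholders, and this is not the actual ReWiND architecture or training recipe (see rewind-reward.github.io for that).

```python
# Minimal sketch of a language-conditioned reward: score each frame by how
# close it sits to the task instruction in a shared video-language embedding
# space. Illustrative only -- not the actual ReWiND method.
import numpy as np

def embed_text(instruction: str) -> np.ndarray:
    """Placeholder for a pretrained text encoder (hypothetical)."""
    rng = np.random.default_rng(abs(hash(instruction)) % 2**32)
    return rng.standard_normal(512)

def embed_frame(frame: np.ndarray) -> np.ndarray:
    """Placeholder for a pretrained visual encoder (hypothetical)."""
    rng = np.random.default_rng(int(frame.sum()) % 2**32)
    return rng.standard_normal(512)

def language_reward(frames: list[np.ndarray], instruction: str) -> list[float]:
    """Per-frame reward = cosine similarity to the instruction embedding."""
    g = embed_text(instruction)
    g = g / np.linalg.norm(g)
    rewards = []
    for frame in frames:
        z = embed_frame(frame)
        z = z / np.linalg.norm(z)
        rewards.append(float(z @ g))  # higher = closer to the described goal
    return rewards
```

A dense reward like this is what lets an offline-to-online RL loop improve on out-of-distribution tasks without collecting new demonstrations.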
Are current eval/deployment practices enough for today’s robot policies? Announcing the Eval&Deploy workshop at CoRL 2025 @corl_conf, where we'll explore eval + deployment in the robot learning lifecycle and how to improve it! eval-deploy.github.io 🗓️ Submissions due Aug 30
🌐 Project: liralab.usc.edu/handretrieval/ 📄 Paper: arxiv.org/abs/2505.20455 Amazing collaborators: @matthewh6_, @aliangdw, @minjunkevink, Harshitha Rajaprakash, @_jessethomason_, @ebiyik_. @matthewh6_ will be applying to PhD programs this year!
Human-AI decision making is something I think is bad but probably inevitable, so the least we can do is explore ways to reduce inappropriate human reliance on AI system output. In @_Tejas_S_’s latest work, he does just that, successfully mitigating /both/ over- and under-reliance.
People are relying on AI assistance to make all kinds of decisions. *How* they incorporate AI recommendations is influenced by previous user-AI interactions and their evolving trust in the AI, which AI assistants are typically blind to. But what if they weren’t? We show that…
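A toy sketch of what "not blind" could look like: track whether the user followed past recommendations, turn that history into a trust estimate, and adapt how the next recommendation is framed. The class and thresholds below are illustrative assumptions, not the model from the paper.

```python
# Toy sketch of a trust-aware assistant: estimate user trust from whether
# past advice was followed, then adjust presentation to counter both
# over-reliance and under-reliance. Purely illustrative.
class TrustAwareAssistant:
    def __init__(self):
        self.followed = 1   # pseudo-count: times the user took the advice
        self.ignored = 1    # pseudo-count: times the user overrode it

    @property
    def trust(self) -> float:
        """Point estimate of the user's propensity to rely on the AI."""
        return self.followed / (self.followed + self.ignored)

    def observe(self, user_followed_advice: bool) -> None:
        """Update the trust estimate after each interaction."""
        if user_followed_advice:
            self.followed += 1
        else:
            self.ignored += 1

    def present(self, recommendation: str, ai_confidence: float) -> str:
        # Over-reliance risk: high trust + low AI confidence -> add friction.
        if self.trust > 0.8 and ai_confidence < 0.6:
            return f"Low-confidence suggestion (please verify): {recommendation}"
        # Under-reliance risk: low trust + high AI confidence -> add support.
        if self.trust < 0.3 and ai_confidence > 0.9:
            return f"High-confidence recommendation: {recommendation}"
        return f"Suggestion: {recommendation}"
```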
Excited for @_abraranwar's work pushing the frontier of active evaluation for robot policies. Big, neural, autoregressive models in other fields get evaluated robustly, but large-scale evaluation in robotics is too costly. We need to find the right experiments for the right experimenter cost!
All these VLAs allow robots to do more tasks, but when you're physically testing many policies, it's hard to eval on every task! We take advantage of shared information between tasks and within policies to actively test multi-task robot policies! 1/7 🧵 arxiv.org/abs/2502.09829
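As a rough illustration of active testing under a rollout budget, here is a generic baseline: keep a Beta posterior over each (policy, task) success rate and spend the next physical trial on the most uncertain cell. This is a simplification for intuition only; the paper's method additionally exploits shared information across tasks and policies (arxiv.org/abs/2502.09829).

```python
# Toy active-testing loop: maintain a Beta posterior over each
# (policy, task) success rate and spend each real rollout on the cell
# we are most uncertain about. A generic baseline, not the paper's method.
import numpy as np

def posterior_variance(a, b):
    """Elementwise variance of Beta(a, b) posteriors over success rates."""
    return a * b / ((a + b) ** 2 * (a + b + 1))

def active_eval(run_trial, n_policies: int, n_tasks: int, budget: int):
    """run_trial(p, t) -> bool executes one real rollout and reports success."""
    alpha = np.ones((n_policies, n_tasks))  # successes + 1 (uniform prior)
    beta = np.ones((n_policies, n_tasks))   # failures + 1
    for _ in range(budget):
        var = posterior_variance(alpha, beta)
        p, t = np.unravel_index(np.argmax(var), var.shape)
        if run_trial(p, t):
            alpha[p, t] += 1
        else:
            beta[p, t] += 1
    return alpha / (alpha + beta)  # posterior mean success rate per cell
```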
I missed this post back in JULY when Tanmay made it, but it's prescient and even more relevant now. Core NLP folks, remember not to re-invent the wheel. Agents are a thing in robotics and reinforcement learning and planning. We have algorithms! Come chat with us!
Do we need to narrowly redefine "Agent" for LLM-Agents or can we just borrow a broader definition from RL / Embodied AI literature? LLM Agents are agentic in the same sense that a trained robot or an RL policy is agentic. Making this connection more explicit allows us to borrow…
❓What is an agent? I get asked this question a lot, so I wrote a little blog on this topic and other things:
- What is an agent?
- What does it mean to be agentic?
- Why is “agentic” a helpful concept?
- Agentic is new
Check it out here: blog.langchain.dev/what-is-an-age…
Come say hi! #EMNLP2024 this week, featuring research by @CSatUSC researchers @swabhz @robinomial @_jessethomason_ @xiangrenNLP @jaspreetranjit_ and more!✨ @USCViterbi @USCAdvComputing
Welcome reception is in full swing at #EMNLP2024
Jesse Thomason gave a really interesting talk arguing that the language we use in robotics is really lame and uninteresting, and that most robotics problems are "below" the level of meaningful language
Abrar will be presenting his @NVIDIAAI work, ReMEmbR, as a spotlight during LangRob @ CoRL at 0930, and will be at the 1000 poster session on the first floor!
Robots are deployed for long periods of time, but how can they answer questions and generate goals based on their long-horizon history? During my internship at #NVIDIA, we built ReMEmbR, a retrieval-augmented memory for embodied robots. 1/8 🧵 nvidia-ai-iot.github.io/remembr/
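The general retrieval-augmented memory pattern is sketched below, with a hypothetical text encoder standing in for real captioners and embedders: log timestamped event captions as the robot operates, then fetch the top-k most relevant entries to ground an answer. ReMEmbR's actual system is more involved (nvidia-ai-iot.github.io/remembr/); this is just the shape of the idea.

```python
# Sketch of a retrieval-augmented robot memory: store timestamped event
# captions with embeddings, retrieve the top-k most relevant to a question.
# Illustrative of the general pattern only, not ReMEmbR's implementation.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder for a real text encoder (hypothetical)."""
    rng = np.random.default_rng(abs(hash(text)) % 2**32)
    v = rng.standard_normal(256)
    return v / np.linalg.norm(v)

class RobotMemory:
    def __init__(self):
        self.entries: list[tuple[float, str, np.ndarray]] = []

    def log(self, timestamp: float, caption: str) -> None:
        """Store one observation, e.g. 'saw a forklift near dock 3'."""
        self.entries.append((timestamp, caption, embed(caption)))

    def retrieve(self, question: str, k: int = 3) -> list[tuple[float, str]]:
        """Return the k memories most similar to the question embedding."""
        q = embed(question)
        scored = sorted(self.entries, key=lambda e: -float(e[2] @ q))
        return [(t, c) for t, c, _ in scored[:k]]
```

Retrieved (timestamp, caption) pairs can then be handed to an LLM to answer the question or generate a navigation goal.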
USC research at #CoRL2024! 👏✨ Topics include zero-shot robotic manipulation, comparative language feedback, bimanual manipulation + more 👇 viterbischool.usc.edu/news/2024/11/u… @corl_conf @yuewang314 @ebiyik_ @daniel_t_seita @_jessethomason_ @gauravsukhatme @USCViterbi @USCAdvComputing
Thanks to @mark_riedl and other GaTech organizers for inviting me to talk about the work from our GLAMOR lab at the Summit on Responsible Computing, AI, and Society. I am excited about the momentum behind increasing crosstalk between researchers in AI, HCI, policy, and health!
Hi all. I am co-organizing a Summit on Responsible Computing, AI, and Society at @GeorgiaTech Oct 28-30. We will actively discuss the future of human-centered AI, healthcare, sustainability, and tech policy. rcais.github.io You should come.