Karthik Mahadevan
@karthikm0
Human-Robot Interaction Researcher | PhD candidate in the @dgpToronto lab at @UofT.
I am on the job market, seeking tenure-track or industry research positions starting in 2025. My research combines human-computer interaction and robotics—please visit karthikmahadevan.ca for updated publications and CV. Feel free to reach out if interested. RT appreciated!
Congrats to @karthikm0 (and amazing co-authors!) on the Best Paper Award at #HRI2025 for "ImageInThat: Manipulating Images to Convey User Instructions to Robots." The paper proposes direct manipulation of images as a new paradigm to instruct robots. 🔗 karthikmahadevan.ca/files/hri25-11…
💼 I'm on the job market for tenure-track faculty positions or industry research scientist roles, focusing on HCI, Human-AI interaction, Creativity Support, and Educational Technology. Please reach out if hiring or aware of relevant opportunities! RT appreciated! 🧵 (1/n)
Collect robot demos from anywhere through AR! Excited to introduce 🎯DART, a Dexterous AR Teleoperation interface that lets anyone teleoperate robots in cloud-hosted simulation. With DART, anyone can collect robot demos anywhere, anytime, for multiple robots and tasks in one…
I'm on the job market👀seeking TT-faculty and post-doc positions starting Fall 2025 to continue my research in family-centered design of socially interactive systems👀 I wrote a "blog" announcing this & my reflections on our latest RO-MAN'24 publication: linkedin.com/posts/bengisuc…
What’s the future of #HCI + #AI innovation? I believe it’s bright! Had some fun writing this article on drawing parallels with the world of mixed martial arts 💪👊 x.com/ToviGrossman/s…
x.com/i/article/1836…
Happy to announce 2 #CHI2024 papers from @ExiiUW, @uwhci, & @UWCheritonCS! First, @nonsequitoria & I show that constraining how many words readers can highlight in a document reader can improve reading comprehension. ⭐️📃 Details here: nikhitajoshi.ca/constrained-hi…
📢📢📢 A pulse of light takes ~3 ns to pass through a Coke bottle, about 100 million times shorter than the blink of an eye. Our work lets you fly around this 3D scene at the speed of light, revealing propagating wavefronts of light that are invisible to the naked eye, from any viewpoint!…
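A quick back-of-the-envelope check of that ratio (assuming a typical ~300 ms blink, which is not stated in the tweet):

```python
# Back-of-the-envelope check of the "100 million times" claim.
# The ~300 ms blink duration is an assumption, not from the tweet.
light_through_bottle_s = 3e-9   # ~3 ns for light to traverse a Coke bottle
blink_s = 0.3                   # typical blink, roughly 100-400 ms

print(f"ratio = {blink_s / light_through_bottle_s:.0e}")  # ~1e+08, i.e. ~100 million
```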
✨ Introducing Keypoint Action Tokens. 🤖 We translate visual observations and robot actions into a "language" that off-the-shelf LLMs can ingest and output. This transforms LLMs into *in-context, low-level imitation learning machines*. 🚀 Let me explain. 👇🧵
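To make the "language" framing concrete, here is a minimal sketch of the general idea only, not the paper's actual tokenizer: hypothetical 2D keypoints and an end-effector action are serialized into a compact text string that an off-the-shelf LLM could consume as an in-context imitation example. All keypoint and action values below are made up.

```python
# Illustrative sketch (NOT the Keypoint Action Tokens implementation):
# serialize keypoints and actions into text tokens for an LLM prompt.

def to_tokens(keypoints, action):
    """Encode 2D keypoints (pixels) and an end-effector action as a token string."""
    kp = " ".join(f"k{i}:{x},{y}" for i, (x, y) in enumerate(keypoints))
    act = " ".join(f"{v:.3f}" for v in action)
    return f"<obs> {kp} </obs> <act> {act} </act>"

# One hypothetical demonstration: observed keypoints -> gripper delta (dx, dy, dz, grip)
demo_obs = [(112, 87), (340, 210), (255, 155)]
demo_act = (0.02, -0.01, 0.00, 1.0)

# New observation the LLM should map to an action, given the demo in context.
test_obs = [(118, 90), (338, 205), (260, 150)]

prompt = (
    "Continue the pattern.\n"
    f"{to_tokens(demo_obs, demo_act)}\n"
    f"<obs> {' '.join(f'k{i}:{x},{y}' for i, (x, y) in enumerate(test_obs))} </obs> <act>"
)
print(prompt)  # An LLM would complete the <act> ... </act> span with a predicted action.
```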
Through a weeklong, immersive program at @UofTCompSci’s Dynamic Graphics Project lab, high school students learned about graduate school and what it’s like to be a computer science researcher. uoft.me/al-
How do we get robots to efficiently explore diverse scenes and answer realistic questions? e.g., is the dishwasher in the kitchen open❓ 👇Explore until Confident — know where to explore (with VLMs) and when to stop exploring (with guarantees) explore-eqa.github.io
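A hedged sketch of the general recipe (not the authors' code): visit candidate viewpoints, ask a VLM how confidently it can answer the question from each view, and stop once confidence clears a calibrated threshold. The `vlm_answer_confidence` scorer, waypoint names, and threshold here are placeholders.

```python
import random

# Toy "explore until confident" loop; the VLM call is a random stand-in.
def vlm_answer_confidence(view, question):
    # Placeholder: a real system would capture `view` and query a VLM.
    return random.choice(["yes", "no"]), random.uniform(0.3, 0.99)

def explore_until_confident(candidate_views, question, threshold=0.9, budget=20):
    """Visit views in order; stop early when answer confidence exceeds the threshold."""
    best_answer, best_conf = None, 0.0
    for step, view in enumerate(candidate_views[:budget]):
        answer, conf = vlm_answer_confidence(view, question)
        if conf > best_conf:
            best_answer, best_conf = answer, conf
        if best_conf >= threshold:  # calibrated stopping rule ("when to stop exploring")
            return best_answer, best_conf, step + 1
    return best_answer, best_conf, budget

answer, conf, steps = explore_until_confident(
    candidate_views=[f"waypoint_{i}" for i in range(20)],
    question="Is the dishwasher in the kitchen open?",
)
print(f"answer={answer} confidence={conf:.2f} after {steps} views")
```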
Videos are fun to watch, but editing them ourselves can be challenging! Next week at @ACMIUI '24, I will be presenting #LAVE, a video editing tool that integrates an LLM agent to collaborate with users, exploring the future of agent-assisted content editing.
Our @HCI_Bath group at @UniofBath (ranked 🇬🇧 top 5) is searching for a rockstar Lecturer/Assistant Professor with interests in AR/VR, fabrication, interaction techniques, wearables, BCI, AI/ML🎉 ⏰Deadline: April 5th 💼Apply here: bath.ac.uk/jobs/Vacancy.a… #HCI #CHI2024 #UIST2024
HCI hiring alert! Come and work with us in @HCI_Bath at @UniofBath - we have a lecturer (Assistant Professor) position open to align with our interests in AR/VR, fabrication, wearables etc. Deadline is April 5. Please share with any great candidates! bath.ac.uk/jobs/Vacancy.a…
Fantastic talk by Carolina Parada @carolina_parada from Google Deepmind on using LLMs to control and teach robots. LLMs seem to be the hammer we’ve been looking for in personal robotics. @HRI_Conference