John Zhang
@johnzhangx
PhD student @CarnegieMellon, @cmurexlab | prev @GeorgiaTech
Dynamic whole-body locomotion and manipulation without offline training. Very simple online sampling with MPPI is all you need! website: whole-body-mppi.github.io arxiv: arxiv.org/abs/2409.10469
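For readers unfamiliar with MPPI (model predictive path integral control), here is a minimal generic sketch of the sampling loop, not the paper's actual implementation: perturb a nominal control sequence, roll each sample through a dynamics model, and average the perturbations with softmax weights on cost. The `dynamics` and `cost` callables and all parameter values are assumptions for illustration.

```python
import numpy as np

def mppi_step(u_nominal, dynamics, cost, x0, n_samples=256, sigma=0.3, lam=1.0):
    """One MPPI update: returns an improved control sequence of shape (H, m)."""
    H, m = u_nominal.shape                         # horizon, control dimension
    noise = sigma * np.random.randn(n_samples, H, m)
    costs = np.zeros(n_samples)
    for k in range(n_samples):                     # roll out each perturbed sequence
        x = x0
        for t in range(H):
            u = u_nominal[t] + noise[k, t]
            x = dynamics(x, u)                     # assumed one-step simulator
            costs[k] += cost(x, u)
    w = np.exp(-(costs - costs.min()) / lam)       # softmax weights over samples
    w /= w.sum()
    return u_nominal + np.einsum('k,khm->hm', w, noise)  # cost-weighted update
```

In practice the first control of the returned sequence is applied, the sequence is shifted, and the loop repeats online at each control step.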
Robots developed by @CMU_Robotics are helping to paint the future. Literally. 🤖🎨 Collaborative FRIDA (CoFRIDA) interactively co-paints with people, working with users of any artistic ability to create art together in the real world. cmu.is/CoFRIDA
We interact with dogs through touch -- a simple pat can communicate trust or instruction. Shouldn't interacting with robot dogs be as intuitive? Most commercial robots lack tactile skins. We present UniTac: a method to sense touch using only existing joint sensors! [1/5]
🎉Excited to share that our paper was a finalist for best paper at #HRI2025! We introduce MOE-Hair, a soft robot system for hair care 💇🏻💆🏼 that uses mechanical compliance and visual force sensing for safe, comfortable interaction. Check it out: moehair.github.io 🧵1/7
RL is notoriously sample inefficient. How can we scale RL on tasks that are much slower to simulate than rigid-body physics, such as soft bodies? In our #ICLR2025 spotlight, we introduce both a new first-order RL algorithm, SAPO, and a differentiable simulation platform, Rewarped. 1/n
Sharing my recent project, agent-to-sim: From monocular videos taken over a long time horizon (e.g., 1 month), we learn an interactive behavior model of an agent (e.g., a 🐱) grounded in 3D. gengshan-y.github.io/agent2sim-www/
Excited to finally release our NeurIPS 2024 (spotlight) paper! We introduce Run-Length Tokenization (RLT), a simple way to significantly speed up your vision transformer on video with no loss in performance!
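As a rough intuition for the run-length idea (an illustrative toy, not the paper's exact procedure): patch tokens that barely change from the previous frame are dropped, and each kept token records how many frames it "runs" for. The threshold `tau` and all names below are assumptions.

```python
import numpy as np

def run_length_tokenize(patches, tau=0.05):
    """patches: (T, N, D) per-frame patch features. Returns kept tokens and run lengths."""
    T, N, D = patches.shape
    tokens, runs = [], []
    last_idx = [-1] * N                            # last kept token per patch position
    for t in range(T):
        for n in range(N):
            changed = last_idx[n] < 0 or np.abs(patches[t, n] - tokens[last_idx[n]]).mean() > tau
            if changed:
                tokens.append(patches[t, n])       # patch changed: emit a new token
                runs.append(1)
                last_idx[n] = len(tokens) - 1
            else:
                runs[last_idx[n]] += 1             # static patch: extend its run
    return np.stack(tokens), np.array(runs)        # runs can inform a length encoding
```

On mostly static video this keeps far fewer tokens than frames × patches, which is where the speedup comes from.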
We're presenting Jacta, a versatile planner for learning dexterous and whole-body manipulation, this week at CoRL! website: jacta-manipulation.github.io paper: arxiv.org/abs/2408.01258
Our team is presenting work at the Conference on Robot Learning, @corl_conf, in Munich, Germany this week! Learn more about our accepted research — theaiinstitute.com/news/corl-roun…
Can robots make pottery🍵? Throwing a pot is a complex manipulation task of continuously deforming clay. We will present RoPotter, a robot system that uses structural priors to learn from demonstrations and make pottery @HumanoidsConf @CMU_Robotics 👇robot-pottery.github.io 1/8🧵
Join us TOMORROW in welcoming Dr. Zac Manchester (@zacinaction ) as he presents “Composable Optimization for Robotic Motion Planning and Control” from 10:30AM - 11:45AM. More info: grasp.upenn.edu/events/spring-… #GRASP #GRASPLab #GRASPonRobotics @GRASPSeminar
To support richer human-robot interaction, we made FRIDA more collaborative. CoFRIDA can take turns with a person to create drawings and paintings 🧵 ICRA'24 @ieee_ras_icra @CMUBigLab @GauravTParmar @junyanz89 @1x @JeanOhCmuBIG
Introducing Open-World Mobile Manipulation 🦾🌍 – A full-stack approach for operating articulated objects in open-ended unstructured environments: Unlocking doors with lever handles / round knobs / spring-loaded hinges 🔓🚪 Opening cabinets, drawers, and refrigerators 🗄️ 👇…
Deformable objects are common in household, industrial and healthcare settings. Tracking them would unlock many applications in robotics, gen-AI, and AR. How? Check out MD-Splatting: a method for dense 3D tracking and dynamic novel view synthesis on deformable cloths. 1/6🧵
Can we effectively use LLMs for video question answering? Excited to announce our latest paper, Zero-Shot Video Question Answering with Procedural Programs, which uses LLMs to generate programs that answer questions about videos! [1/6]
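A hedged sketch of the procedural idea, assuming hypothetical vision primitives (the module names and prompt are illustrative, not the paper's actual API): the LLM writes a small program over per-frame tools, and executing that program produces the answer.

```python
# Illustrative sketch: LLM-generated program for video question answering.
PROMPT = """Write a Python function answer(video) that answers: "{question}"
You may call caption(frame), detect(frame, label), and iterate over video.frames."""

def answer_video_question(video, question, llm, tools):
    code = llm(PROMPT.format(question=question))   # LLM returns program text
    scope = dict(tools)                            # expose caption/detect helpers to the program
    exec(code, scope)                              # defines answer(video) in scope
    return scope["answer"](video)                  # run the generated program on the video
```

Here `llm` is any text-completion callable and `tools` is a dict of assumed vision functions; the point is that the reasoning lives in the generated program rather than in a single end-to-end model call.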