Arthur Allshire
@arthurallshire
robotics & machine learning. PhD student @Berkeley_AI. prev EngSci @UofT / @NvidiaAI 🇮🇪 🇨🇦 🇦🇺🇨🇭🇨🇿
our new system trains humanoid robots using data from cell phone videos, enabling skills such as climbing stairs and sitting on chairs in a single policy (w/ @redstone_hong @junyi42 @davidrmcall)
VideoMimic is genuinely inspiring work. Interacting with terrain is hard, and doing it from just a couple of videos is impressive.
Full episode dropping soon! Geeking out with @arthurallshire @redstone_hong on VideoMimic videomimic.net Co-hosted by @chris_j_paxton @micoolcho
LEAP Hand now supports Isaac Lab! (in addition to Gym, MuJoCo, and PyBullet) This 1-axis reorientation uses purely the proprioception of the LEAP Hand motors to sense the cube. We open-source both Python and ROS 2 deployment code! Led by @srianumakonda github.com/leap-hand/LEAP…
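For readers wondering what "purely proprioception" means in practice: the policy's observation contains only motor states and its own past commands, with no cube pose and no camera. A minimal sketch, where the exact fields and dimensions are my illustrative assumptions rather than the released code:

```python
import numpy as np

N_MOTORS = 16  # the LEAP Hand has 16 motors

def build_obs(joint_pos, joint_vel, last_action):
    # proprioception only: no object pose or vision anywhere in the observation
    return np.concatenate([joint_pos, joint_vel, last_action])

obs = build_obs(np.zeros(N_MOTORS), np.zeros(N_MOTORS), np.zeros(N_MOTORS))
assert obs.shape == (3 * N_MOTORS,)
```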
🔥🚨 Preprint alert: Relative Entropy Pathwise Policy Optimization #REPPO 🚨🔥 What if you could have on-policy training without the instability and parameter tuning that plague #PPO? What if training with deterministic policy gradient just worked? With our new method it does!
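The tweet is a headline; for the shape of the idea, here is a minimal sketch of a pathwise (reparameterized) policy gradient combined with a relative-entropy penalty to a frozen reference policy, which is the combination the name suggests. The network sizes, penalty weight, and loss form are my assumptions, not the paper's implementation; critic training is omitted.

```python
import torch
import torch.nn as nn
from torch.distributions import Normal, kl_divergence

obs_dim, act_dim, beta = 8, 2, 0.1  # beta: KL penalty weight (assumed)

policy = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, 2 * act_dim))
critic = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(policy.parameters(), lr=3e-4)

def gaussian(net, obs):
    mean, log_std = net(obs).chunk(2, dim=-1)
    return Normal(mean, log_std.clamp(-5, 2).exp())

obs = torch.randn(256, obs_dim)        # a batch of on-policy states
with torch.no_grad():                  # frozen copy acts as the relative-entropy anchor
    ref = gaussian(policy, obs)
    ref = Normal(ref.mean, ref.stddev)

pi = gaussian(policy, obs)
action = pi.rsample()                  # pathwise sample: gradients flow through the action
q = critic(torch.cat([obs, action], dim=-1))  # critic is trained separately (omitted)
loss = -q.mean() + beta * kl_divergence(pi, ref).sum(-1).mean()
opt.zero_grad(); loss.backward(); opt.step()
```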
Awesome to hear about my former labmates doing big things at Skild!
At a robotics lab in Pittsburgh, engineers are building adaptable, AI-powered robots that could one day work where it's too dangerous for humans. The research drew a visit from President Trump, who touted U.S. dominance in AI as companies announced $90 billion in new investments.
This is a great compliment! Our real-to-sim code is now available. It can recover both the environment and target motion from videos. github.com/hongsukchoi/Vi…
Ep#20 with @arthurallshire @redstone_hong on VideoMimic videomimic.net Co-hosted by @chris_j_paxton @micoolcho
Everyone knows action chunking is great for imitation learning. It turns out that we can extend its success to RL to better leverage prior data for improved exploration and online sample efficiency! colinqiyangli.github.io/qc/ The recipe to achieve this is incredibly simple. 🧵 1/N
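To make the "incredibly simple" recipe concrete: the actor emits an h-step action chunk in one forward pass, the critic scores the whole chunk, and the TD backup jumps h steps at once. A minimal sketch under my own assumed shapes and names; see the linked page for the actual method:

```python
import torch
import torch.nn as nn

obs_dim, act_dim, h, gamma = 8, 2, 4, 0.99  # h: chunk length (assumed)

actor = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, h * act_dim))
critic = nn.Sequential(nn.Linear(obs_dim + h * act_dim, 64), nn.ReLU(), nn.Linear(64, 1))

def q_chunk(obs, chunk):
    # temporally extended value Q(s_t, a_t..a_{t+h-1})
    return critic(torch.cat([obs, chunk.flatten(1)], dim=-1))

obs = torch.randn(32, obs_dim)
chunk = actor(obs).view(-1, h, act_dim)     # one forward pass emits h actions

# the backup skips h steps, so the bootstrap is discounted by gamma**h
next_obs = torch.randn(32, obs_dim)
reward_sum = torch.randn(32, 1)             # discounted reward over the chunk (dummy)
with torch.no_grad():
    next_chunk = actor(next_obs).view(-1, h, act_dim)
    target = reward_sum + gamma**h * q_chunk(next_obs, next_chunk)
td_loss = (q_chunk(obs, chunk.detach()) - target).pow(2).mean()
```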
The Dex team at NVIDIA is defining the bleeding edge of sim2real dexterity. Take a look below 🧵 There's a lot happening at NVIDIA in robotics, and we’re looking for good people! Reach out if you're interested. We have some big things brewing (and scaling :)
We tested WSRL (Warm-start RL) on a Franka Robot, and it leads to really efficient online RL fine-tuning in the real world! WSRL learned the peg insertion task perfectly with only 11 minutes of warmup and *7 minutes* of online RL interactions 👇🧵
Presenting DemoDiffusion: An extremely simple approach enabling a pre-trained 'generalist' diffusion policy to follow a human demonstration for a novel task during inference. One-shot human imitation *without* requiring any paired human-robot data or online RL 🙂 1/n
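One plausible reading of the inference-time mechanism: instead of denoising from pure noise, partially noise a retargeted human trajectory and let the pre-trained diffusion policy finish the denoising, so the output stays close to the demo while remaining executable. The sketch below is my hedged interpretation with stand-in names and a toy schedule, not the authors' code:

```python
import torch

T, k = 100, 30  # total diffusion steps and injected-noise level (both assumed)

def denoise_step(policy, x, t, obs):
    # stand-in for one reverse-diffusion step of the pre-trained policy
    return x - policy(x, t, obs)

retargeted = torch.randn(16, 7)            # human demo mapped to robot actions (dummy)
alpha_bar = torch.linspace(1.0, 0.01, T)   # toy noise schedule (assumed)
obs = torch.randn(1, 32)                   # current robot observation (dummy)
policy = lambda x, t, obs: torch.zeros_like(x)  # stand-in network

# forward-noise the demo to step k, then denoise from there instead of from step T
x = alpha_bar[k].sqrt() * retargeted + (1 - alpha_bar[k]).sqrt() * torch.randn_like(retargeted)
for t in range(k, -1, -1):
    x = denoise_step(policy, x, t, obs)
# x is now an action sequence biased toward the human demonstration
```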
Congratulations to BAIR researchers @kevin_zakka @qiayuanliao @arthurallshire @carlo_sferrazza @KoushilSreenath @pabbeel and Google collaborators for winning the Outstanding Demo Paper Award at RSS 2025! playground.mujoco.org
We’re super thrilled to have received the Outstanding Demo Paper Award for MuJoCo Playground at RSS 2025! Huge thanks to everyone who came by our booth and participated, asked questions, and made the demo so much fun! @carlo_sferrazza @qiayuanliao @arthurallshire
Come check out the LEAP Hand and DexWild live in action at #RSS2025 today!
We shipped a robot running on-device and brought it to RSS! Please come and check it out 🤖🦾
Presenting FACTR today at #RSS2025 in the Imitation Learning I session at 5:30pm (June 22). Come by if you're interested in force-feedback teleop and policy learning!
Low-cost teleop systems have democratized robot data collection, but they lack any force feedback, making it challenging to teleoperate contact-rich tasks. Many robot arms provide force information — a critical yet underutilized modality in robot learning. We introduce: 1. 🦾A…
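The core loop is easy to picture: torques sensed on the follower arm are scaled down and replayed on the leader's motors, so the operator feels contact. A minimal sketch where `LeaderArm` and `FollowerArm` are hypothetical stand-ins for whatever hardware SDK is in use; this is not FACTR's released code:

```python
import time

class LeaderArm:                      # hypothetical low-cost leader device
    def read_joint_positions(self):
        return [0.0] * 7              # stub: 7-DoF joint pose
    def apply_joint_torques(self, tau):
        pass                          # would drive the leader's motors

class FollowerArm:                    # hypothetical robot arm interface
    def command_joint_positions(self, q):
        pass
    def read_external_torques(self):
        return [0.0] * 7              # stub: joint-space external torque estimate

leader, follower = LeaderArm(), FollowerArm()
GAIN = 0.3                            # attenuate feedback for the operator (assumed)

for _ in range(1000):                 # ~2 s of teleop at 500 Hz (assumed rate)
    follower.command_joint_positions(leader.read_joint_positions())
    tau_ext = follower.read_external_torques()   # contact forces at the follower
    leader.apply_joint_torques([GAIN * t for t in tau_ext])
    time.sleep(0.002)
```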
Driving down to #RSS2025 with @qiayuanliao, @carlo_sferrazza, and @arthurallshire to demo MuJoCo Playground! We’re excited to host a hands-on demo across 3 hardware platforms—quadruped, humanoid, and hand—and even train + deploy policies live!
Today we're excited to share a glimpse of what we're building at Generalist. As a first step towards our mission of making general-purpose robots a reality, we're pushing the frontiers of what end-to-end AI models can achieve in the real world. Here's a preview of our early…
📣📣📣 Neural Inverse Rendering from Propagating Light 💡 just won Best Student Paper award at #CVPR!!!
📢📢📢 Neural Inverse Rendering from Propagating Light 💡 Our CVPR Oral introduces the first method for multiview neural inverse rendering from videos of propagating light, unlocking applications such as relighting light propagation videos, geometry estimation, or light…
Excited to present VideoMimic this week at #CVPR2025! 🎥🤖 📌 POETs Workshop "Embodied Humans" Spotlight Talk | June 12 (Thu), until 10:10 | Room 101B 📌 Agents in Interaction: From Humans to Robots Poster #182-#201 | June 12 (Thu), until 12:15 | ExHall D Come by and chat!…