Lester Li
@sizhe_lester_li
PhD student @MITEECS @MIT_CSAIL | A capable embodied intelligence must understand its body and use it to the fullest.
Now in Nature! 🚀 Our method learns a controllable 3D model of any robot from vision, enabling single-camera closed-loop control at test time! This includes robots previously uncontrollable, soft, and bio-inspired, potentially lowering the barrier of entry to automation! Paper:…


TRI's latest Large Behavior Model (LBM) paper landed on arxiv last night! Check out our project website: toyotaresearchinstitute.github.io/lbm1/ One of our main goals for this paper was to put out a very careful and thorough study on the topic to help people understand the state of the…
Since its release 11 years ago, just one hour and 31 minutes have passed on Miller's planet in Interstellar.
CryoDRGN-AI ❄️🐉🤖 is now published in @naturemethods!!! So excited to see this out and a huge congrats to @axlevy0 and team! CryoDRGN-AI extends cryoDRGN from requiring fixed camera poses as input to end-to-end ab initio reconstruction of biomolecules and their conformational…
We present DRGN-AI for fast, ab initio cryo-EM reconstruction! * learns a neural field from unposed images, * designed for single-shot reconstruction of unfiltered datasets, * finds new states missed by prior approaches! Teamwork led by @ZhongingAlong drgnai.cs.princeton.edu 1/
Nature research paper: Plants monitor the integrity of their barrier by sensing gas diffusion go.nature.com/4kpUTFF
We made a @gradio demo for AllTracker! AllTracker is the current state-of-the-art for general-purpose point tracking. The demo gives a good sense of the accuracy---try your own videos and see for yourself! 🔗 Demo: huggingface.co/spaces/aharley… 💻 Code: github.com/aharley/alltra…
Russ's recent talk at Stanford has to be my favorite in the past couple of years. I have asked everyone in my lab to watch it. youtube.com/watch?v=TN1M6v… IMO our community has accrued a huge amount of "research debt" (analogous to "technical debt") through flashy demos and…
You can try training your own Jacobian Fields in 2D! We found that, compared with no-structure, black-box, and direct optical-flow-prediction baselines, the Jacobian sparsity structure generalizes rapidly to unseen states and motions from just two training samples. What's a general…
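(For anyone curious what the 2D experiment boils down to, here is a minimal PyTorch sketch of the idea, not the released code: a small network maps each 2D point to a per-point Jacobian, and predicted motion is that Jacobian applied to the command. The class name ToyJacobianField, the toy rigid-translation data, and all hyperparameters are assumptions made for this sketch.)

```python
import torch
import torch.nn as nn

class ToyJacobianField(nn.Module):
    """Toy sketch: an MLP that outputs a 2 x cmd_dim Jacobian at each 2D point."""
    def __init__(self, cmd_dim=2, hidden=64):
        super().__init__()
        self.cmd_dim = cmd_dim
        self.mlp = nn.Sequential(
            nn.Linear(2, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * cmd_dim),   # one 2 x cmd_dim Jacobian per point
        )

    def forward(self, points, command):
        J = self.mlp(points).view(-1, 2, self.cmd_dim)   # (N, 2, cmd_dim)
        return torch.einsum("nij,j->ni", J, command)     # predicted per-point motion

# Toy data: every point on a rigid 2D "finger" translates by the command,
# so the ground-truth Jacobian is the identity at every point.
points = torch.rand(256, 2)
model = ToyJacobianField()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(200):
    command = torch.randn(2)
    target_flow = command.expand_as(points)              # observed motion for this command
    loss = ((model(points, command) - target_flow) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
print("final training loss:", loss.item())
```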
Simple 2D fingers and sliders can learn Jacobians and be controlled from vision, too! Easy to test in your RL / imitation learning environments! Try this yourself! Code: tinyurl.com/nkywe9t2 (9/n)
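(And a rough sketch of what closed-loop control looks like once a Jacobian field is learned, reusing the hypothetical ToyJacobianField from the sketch above: stack the per-point Jacobians at the tracked points and solve a least-squares problem for the command that moves them toward their goals. The function control_step, the gain, and the tracked/goal points are illustrative; in practice the point positions would come from a tracker on the camera feed.)

```python
import torch

def control_step(model, tracked_points, goal_points, gain=0.5):
    """One closed-loop step: find the command whose predicted point motion
    best matches the desired motion toward the goal (least squares)."""
    with torch.no_grad():
        desired = gain * (goal_points - tracked_points)             # (N, 2) desired motion
        J = model.mlp(tracked_points).view(-1, 2, model.cmd_dim)    # per-point Jacobians
        A = J.reshape(-1, model.cmd_dim)                            # (2N, cmd_dim)
        b = desired.reshape(-1, 1)                                  # (2N, 1)
        command = torch.linalg.lstsq(A, b).solution.squeeze(-1)     # best-fit command
    return command

# Example: drive tracked points toward shifted goal positions with the toy model above.
model = ToyJacobianField()                       # defined in the previous sketch
tracked = torch.rand(32, 2)
goal = tracked + torch.tensor([0.1, -0.05])
print("command:", control_step(model, tracked, goal))
```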
We have open-sourced everything, including our real-world multiview (12-camera) robot action dataset on four robots. Please check it out! We expect the next implementation of our approach to dramatically reduce the reliance on multiview cameras and rendering - stay tuned! :)…
Nature research paper: Controlling diverse robots by inferring Jacobian fields with deep networks go.nature.com/3HXBtKI
Can a robot learn what its own body looks like — just by watching itself move? W/Neural Jacobian Fields (NJF), CSAIL researchers show that it can. Using just one external camera, NJF enables robots to infer the structure of their joints & limbs — no built-in sensors or prior…