Michael Black
@Michael_J_Black
Director, Max Planck Institute for Intelligent Systems (@MPI_IS). Chief Scientist @meshcapade. Building 3D digital humans using vision, graphics, and learning.
Here's how my recent papers & reviews are going: * To solve a vision problem today, the sensible thing is to leverage a pre-trained VLM or video diffusion model. Such models implicitly represent a tremendous amount about the visual world that we can exploit. * Figure out how to…
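To make the "leverage a pre-trained VLM" point concrete, here is a minimal sketch of querying a pre-trained vision-language model instead of training a task-specific model from scratch, assuming the Hugging Face `transformers` visual-question-answering pipeline; the model choice, image path, and question are illustrative, not from the post.

```python
# A minimal sketch, assuming the Hugging Face `transformers` VQA pipeline.
# The model, image path, and question below are illustrative placeholders.
from transformers import pipeline
from PIL import Image

vqa = pipeline("visual-question-answering",
               model="dandelin/vilt-b32-finetuned-vqa")
image = Image.open("scene.jpg")

# Ask a pre-trained VLM a question about the image; the model's implicit
# knowledge of the visual world does the heavy lifting.
answers = vqa(image=image, question="Is the person touching the chair?")
print(answers[0]["answer"], answers[0]["score"])
```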
Contact. Contact. Contact. Contact is to spatial intelligence what location, location, location is to real estate. InteractVLM predicts 3D contacts on humans and objects from a single image. This is a key step in training machines to interact with the 3D world.
🔥 New InteractVLM Models Released! (#CVPR2025) 🔹 Single Model for Joint Human-Object Contact 🔹 3D Human Contact trained on more data, now supports foot-ground contact 🔹 Direct Contact Estimation on Images (2D) 🔗 [Code] github.com/saidwivedi/Int…
Why does 3D human-object reconstruction fail in the wild or get limited to a few object classes? A key missing piece is accurate 3D contact. InteractVLM (#CVPR2025) uses foundation models to infer contact on humans & objects, improving reconstruction from a single image. (1/10)
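For readers new to this, here is a minimal sketch (not the InteractVLM code) of how dense 3D body contact is commonly represented and used: one probability per SMPL-X vertex, thresholded into a binary contact mask that can then constrain human-object reconstruction. The threshold and the random probabilities are stand-ins.

```python
# A minimal sketch (not the InteractVLM code) of dense per-vertex 3D contact.
import numpy as np

NUM_SMPLX_VERTS = 10475                            # SMPL-X vertex count
contact_probs = np.random.rand(NUM_SMPLX_VERTS)    # stand-in for a network prediction

contact_mask = contact_probs > 0.5                 # vertices predicted to be in contact
contact_vertex_ids = np.nonzero(contact_mask)[0]

# During reconstruction, these vertices can be pulled toward the object surface,
# e.g. by penalizing their distance to the nearest object point.
print(f"{contact_vertex_ids.size} of {NUM_SMPLX_VERTS} vertices in contact")
```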
🚨 Meet Meshcapade at #gamescom2025! 🚨 See next-gen markerless motion capture LIVE in Cologne 🎮✨ 📆 August 20-24, 2025 📍 Booth A 067 | Hall 10.1 @gamescom Catch our latest markerless motion capture demos—interactive, real-time & future-proof. Step into the future of game…
MoCapade 3.5 is officially live this week on our platform! 🚀 🎭 Facial expression tracking 👣 Foot locking Capture full-body motion and facial expressions — no suits, no markers, just one camera. Any camera. Experience the next generation of markerless motion capture. 🎉 Come…
In a time when international visas are getting more difficult to obtain and when we should be reducing CO2, EurIPS is a great idea! ELLIS has been transformational, and EurIPS is another example of how it is growing an already thriving AI ecosystem in Europe.
NeurIPS is pleased to officially endorse EurIPS, an independently-organized meeting taking place in Copenhagen this year, which will offer researchers an opportunity to additionally present their accepted NeurIPS work in Europe, concurrently with NeurIPS. Read more in our blog…
When I first started testing Meshcapade eight months ago, it had the typical foot sliding issue and didn’t capture hand movements. Over time, they’ve improved it, and honestly, version 3.5 delivers remarkably efficient capture
First public look at MoCapade3.5 with optional foot locking and facial motion capture.
Meshcapade released version 3.5 yesterday, which includes 🦿 Foot locking 👦 Facial capture Video credit (with Xsens): Lessi Sitdikova 👉 lnkd.in/d-8y5vkH I believe Meshcapade’s video capture is evolving very well 👏 💪
✨ Big news from Meshcapade! ✨ We’re heading to #SIGGRAPH2025 in Vancouver! When: August 10-14, 2025 Where: Booth 209 @siggraph This year, we’re showcasing TWO demos of our next-gen markerless motion capture 🎥⚡ — built for creators, engineers, and researchers who care about…
A test of what I could do with #rodin: convert an image into a 3D model, with @meshcapade video capture, in #UnrealEngine5 💻 Tutorial Spanish youtube.com/watch?v=aE5Pdz… 💻 Tutorial English youtu.be/gDi5BbF8pAo?si…
Google has started using LLMs for Google Translate and is now making things up. It translated "MPI-IS Tübingen" into "Max Planck Institute for Informatics (MPI-IS) Tübingen". MPI-IS is the MPI for Intelligent Systems. The MPI for Informatics (MPI-I) is located in…
Physical intelligence for humanist robots. At @meshcapade we've built the foundational technology for the capture, generation, and understanding of human motion. This blog post explains how this enables robot learning at scale. medium.com/@black_51980/p… perceiving-systems.blog/en/news/toward…
ETCH has been accepted to #ICCV2025 with 456→556 scores. See you all in Hawaii 🌴🥥
💃#ETCH: Generalizing Body Fitting to Clothed Humans via Equivariant Tightness🕺 😎ACCURATE body fitting for 3D clothed humans, even under LOOSE garments, CHALLENGING poses, and EXTREME dynamics! 🔗 Page: boqian-li.github.io/ETCH/ More Info: ⬇️
What happens when you bring production-ready digital humans to the world’s top computer vision conference @CVPR? At #CVPR2025, we showed the future of human modeling, not in theory, but in practice: ✔️ Real-time avatar generation from images and video ✔️ SMPL-based motion and…
Our @ICCVConference HANDS workshop will be on Oct. 20, PM! We focus on hand-related areas, e.g., hand pose est., hand-object interaction, robotics hand manipulation. hands-workshop.org @NUSingapore @CSatETH @unibirmingham @RealityLabs @AIatMeta @UTokyo_News @meshcapade
Mocap Video - How to film? youtu.be/OwBXSBOV3Sg?si… via @YouTube
Public service announcement -- if you're making a new dataset of human motions in SMPL-X format using a marker-based system, please use MoSh. If you first compute a skeleton and then transfer this to SMPL-X, you will lose a lot of realism. MoSh fits SMPL-X to the markers…
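As a rough illustration of the idea (fit the body surface directly to the markers rather than going through a skeleton), here is a minimal direct-to-marker fitting sketch in the spirit of MoSh, assuming the `smplx` and `torch` packages; the marker-to-vertex map, loss weights, and iteration count are hypothetical placeholders, not the actual MoSh implementation.

```python
# A minimal sketch of direct marker fitting, not the MoSh implementation.
# MARKER_VERTEX_IDS and the loss weights below are illustrative placeholders.
import torch
import smplx

MARKER_VERTEX_IDS = [3470, 3171, 5523]                   # hypothetical marker-to-vertex map
markers_3d = torch.rand(1, len(MARKER_VERTEX_IDS), 3)    # observed mocap markers (meters)

# Assumes SMPL-X model files are available under "models/".
model = smplx.create("models", model_type="smplx", gender="neutral", use_pca=False)

betas = torch.zeros(1, 10, requires_grad=True)           # body shape
body_pose = torch.zeros(1, 63, requires_grad=True)       # 21 body joints x 3
global_orient = torch.zeros(1, 3, requires_grad=True)
transl = torch.zeros(1, 3, requires_grad=True)

optim = torch.optim.Adam([betas, body_pose, global_orient, transl], lr=0.01)
for _ in range(200):
    optim.zero_grad()
    out = model(betas=betas, body_pose=body_pose,
                global_orient=global_orient, transl=transl, return_verts=True)
    # Match surface points to markers directly, rather than fitting a skeleton
    # first and discarding the shape/soft-tissue information the markers carry.
    model_markers = out.vertices[:, MARKER_VERTEX_IDS]
    loss = ((model_markers - markers_3d) ** 2).sum() + 1e-3 * (body_pose ** 2).sum()
    loss.backward()
    optim.step()
```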
Farewell @CVPR, it was fantastic being back!! Huge thanks to everyone who stopped by at the Meshcapade booth, our poster presentations and workshops 🥰 Still can’t believe our tiny team of intrepid scientific explorers ended up being part of ALL of that! 🚀 We had incredible…
The @UnitreeRobotics G1 seeing itself for the first time :)
At #CVPR25, the first time at a conference I’ve seen a robot walking around trying out the demos. Won’t be the last! It’s all fun and games until this guy’s twice as tall and the one writing the checks.
Congratulations @saidwivedi and team on winning the RHOBIN human contact challenge.
Come join us at the InteractVLM poster: #147, ExHall D, 10:30 AM–12:30 PM today (Fri, June 15) at #CVPR2025! We present the world’s most accurate in-the-wild 3D contact detector for both humans and objects—winner of two human contact challenges 🏆 at the RHOBIN workshop @CVPR.