Jack Saunders
@jack_r_saunders
🤖 🗞️ Free AI Generated Digital Humans Newsletter -> lovable.dev/projects/6849f…
Like everyone else, I've been struggling to keep up with the sheer volume of papers and news in the Digital Human space. Over the past few weeks, I've developed an agentic (ish) AI pipeline to find and…

⏱️ Turn yourself into a 3D Avatar in real-time with StreamME from Adobe and the University of Rochester (code coming soon)
StreamME: Simplify 3D Gaussian Avatar within Live Stream
TLDR: This work speeds up Gaussian reconstruction using motion-aware anchor points to prevent the…
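As a rough illustration of what "motion-aware anchor points" could mean in practice (a toy heuristic of my own, not StreamME's actual algorithm), the sketch below keeps the Gaussians whose centres move the most between two frames; the function name and the fixed anchor budget are placeholders.

```python
# Hypothetical sketch, NOT the StreamME implementation: pick a sparse set of
# "anchor" Gaussians by keeping the ones whose centres move most frame-to-frame.
import numpy as np

def select_motion_anchors(centres_t0: np.ndarray,
                          centres_t1: np.ndarray,
                          num_anchors: int) -> np.ndarray:
    """Return indices of the num_anchors Gaussians with the largest
    displacement between two frames. centres_* are (N, 3) arrays."""
    displacement = np.linalg.norm(centres_t1 - centres_t0, axis=1)  # (N,)
    return np.argsort(displacement)[-num_anchors:]

# Toy usage: 10k Gaussians, keep the 512 that moved the most.
rng = np.random.default_rng(0)
c0 = rng.normal(size=(10_000, 3))
c1 = c0 + rng.normal(scale=0.01, size=(10_000, 3))
anchors = select_motion_anchors(c0, c1, num_anchors=512)
print(anchors.shape)  # (512,)
```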
SoulDance: Music-Aligned Holistic 3D Dance Generation via Hierarchical Motion Modelling
TLDR: A large dataset and evaluation framework (with new metrics) for 4D dance generation from music audio.
📽️ Project Page: xjli360.github.io/SoulDance/
📜 Paper: arxiv.org/abs/2507.14915
SnapMoGen: Human Motion Generation from Expressive Texts
TLDR: This is a method for animating digital characters from text. The model uses multi-scale tokenisation with a masked generative transformer. It is trained on a novel dataset (which should be released) consisting of 44…
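To make "masked generative transformer" a little more concrete, here is a minimal single-scale sketch of the masked-token objective on discrete motion tokens with text conditioning. It is a generic setup under my own assumptions (vocabulary size, model width, mask ratio), not the SnapMoGen code, and it skips the multi-scale part entirely.

```python
# Hypothetical sketch, NOT SnapMoGen: mask a fraction of discrete motion tokens
# and train a transformer to predict them, conditioned on a text embedding.
import torch
import torch.nn as nn

VOCAB, MASK_ID, DIM = 1024, 1024, 256  # codebook size, [MASK] id, model width

class MaskedMotionTransformer(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB + 1, DIM)  # +1 slot for the [MASK] token
        layer = nn.TransformerEncoderLayer(DIM, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(DIM, VOCAB)

    def forward(self, tokens, text_emb):
        # tokens: (B, T) motion-token ids; text_emb: (B, DIM) text conditioning
        x = self.embed(tokens) + text_emb.unsqueeze(1)
        return self.head(self.encoder(x))  # (B, T, VOCAB) logits

model = MaskedMotionTransformer()
tokens = torch.randint(0, VOCAB, (2, 64))      # toy motion-token sequences
mask = torch.rand(2, 64) < 0.5                 # mask roughly half the tokens
inputs = tokens.masked_fill(mask, MASK_ID)
logits = model(inputs, torch.randn(2, DIM))    # random stand-in text embedding
loss = nn.functional.cross_entropy(logits[mask], tokens[mask])
print(loss.item())
```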
3DGH: 3D Head Generation with Composable Hair and Face
TLDR: A generative model of (static) Gaussian Avatars using a GAN. Here the head and hair are modelled separately with templates, and data is generated synthetically.
📽️ Project Page: c-he.github.io/projects/3dgh/
📜 Paper:…
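A toy picture of what "composable" can mean here: sample separate latent codes for face and hair, decode each into its own set of Gaussians, and render their union, so swapping one code changes only that part. This is a stand-in sketch, not the 3DGH architecture; the decoder and the 7-value Gaussian parameterisation are placeholders.

```python
# Hypothetical sketch, NOT the 3DGH code: independent face and hair decoders
# whose Gaussian sets are simply concatenated into one avatar.
import torch
import torch.nn as nn

class GaussianDecoder(nn.Module):
    """Toy decoder: latent code -> N Gaussians (xyz, rgb, opacity = 7 values)."""
    def __init__(self, latent_dim=128, num_gaussians=2048):
        super().__init__()
        self.num_gaussians = num_gaussians
        self.net = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(),
                                 nn.Linear(512, num_gaussians * 7))

    def forward(self, z):
        return self.net(z).view(-1, self.num_gaussians, 7)

face_dec, hair_dec = GaussianDecoder(), GaussianDecoder()
z_face, z_hair = torch.randn(1, 128), torch.randn(1, 128)

# Swapping z_hair while keeping z_face fixed changes the hairstyle only.
avatar = torch.cat([face_dec(z_face), hair_dec(z_hair)], dim=1)  # (1, 4096, 7)
print(avatar.shape)
```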

💫 Animate any rig using video diffusion models. Not a human-specific method, but really interesting as an idea.
AnimaX: Animating the Inanimate in 3D with Joint Video-Pose Diffusion Models
TLDR: Animate any rig with an arbitrary skeleton using multi-view video diffusion…
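For a feel of the "joint video-pose diffusion" idea, a miniature sketch: a single denoiser takes noisy RGB frames and noisy 2D pose maps concatenated along the channel axis and predicts the noise for both, so appearance and motion are modelled together. None of this is AnimaX's actual model (there is no proper noise schedule and no multi-view handling); every name and shape is a placeholder.

```python
# Hypothetical sketch, NOT AnimaX: one network denoises video and pose jointly.
import torch
import torch.nn as nn

class JointDenoiser(nn.Module):
    def __init__(self, rgb_ch=3, pose_ch=1, hidden=64):
        super().__init__()
        in_ch = rgb_ch + pose_ch
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 3, padding=1), nn.SiLU(),
            nn.Conv2d(hidden, in_ch, 3, padding=1))

    def forward(self, noisy_rgb, noisy_pose):
        x = torch.cat([noisy_rgb, noisy_pose], dim=1)  # stack the two modalities
        eps = self.net(x)                              # predicted noise for both
        return eps[:, :3], eps[:, 3:]                  # split back into rgb / pose

# One toy training step: add noise to both modalities and regress it,
# so the model learns appearance and pose together.
model = JointDenoiser()
rgb, pose = torch.rand(2, 3, 64, 64), torch.rand(2, 1, 64, 64)
noise_rgb, noise_pose = torch.randn_like(rgb), torch.randn_like(pose)
pred_rgb, pred_pose = model(rgb + noise_rgb, pose + noise_pose)
loss = (nn.functional.mse_loss(pred_rgb, noise_rgb)
        + nn.functional.mse_loss(pred_pose, noise_pose))
print(loss.item())
```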