Jensen (Jinghao) Zhou
@jensenzhoujh
cooking foundation spatial AI model @AIatMeta @Oxford_VGG @OxfordTVG | prev @StabilityAI @GoogleAI
Hi there, 🎉 We are thrilled to introduce Stable Virtual Camera, a generalist diffusion model designed to address the exciting challenge of Novel View Synthesis (NVS). With just one or a few images, it allows you to create a smooth trajectory video from any viewpoint you desire.…
this new AI is mind-blowing. Stability AI just dropped Stable Virtual Camera - turn a 2D image into a 3D video (32 depth) - control camera movements with 360° spins, spirals, dolly zooms - keeps everything 3D-consistent for up to 1,000 frames
We are releasing Stable Virtual Camera V1.1 as a minor update, fixing a known issue where foreground objects sometimes detach from the background. See github.com/Stability-AI/s… for details.
I enjoyed reading the paper. Some sampling and architectural designs (e.g., RoPE) were/are being explored in the virtual camera project. Essentially, it is about how to tame a video model that takes in non-causal (i.e., shuffled) inputs & anchors and generates causal short clips in…
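To make that idea concrete, here is a rough sketch of the two-pass recipe as I read it from this thread: first sample a sparse set of anchor views in one non-causal (order-free) batch, then fill the short clips between consecutive anchors. All names here (`sample_views`, `two_pass_nvs`, the dict-based view format) are placeholders of mine, not the actual Stable Virtual Camera API.

```python
from typing import Callable, List, Sequence

def two_pass_nvs(
    sample_views: Callable[[List[dict], List[dict]], List[dict]],  # placeholder for the diffusion sampler
    input_views: List[dict],          # observed images + their camera poses
    target_cameras: Sequence[dict],   # full camera trajectory to render
    anchor_stride: int = 8,           # keep every k-th target camera as an anchor
) -> List[dict]:
    """Two-pass sampling: non-causal anchors first, then causal short clips."""
    # Pass 1: sample a sparse, order-free ("shuffled") set of anchor views,
    # conditioned only on the input views.
    anchor_cams = list(target_cameras[::anchor_stride])
    anchors = sample_views(input_views, anchor_cams)

    # Pass 2: fill each short clip between consecutive anchors, conditioning on
    # the inputs plus the two bounding anchors so the clip stays consistent.
    # (Tail frames past the last anchor are omitted for brevity.)
    frames: List[dict] = []
    for i in range(len(anchors) - 1):
        clip_cams = list(target_cameras[i * anchor_stride:(i + 1) * anchor_stride + 1])
        context = input_views + [anchors[i], anchors[i + 1]]
        frames.extend(sample_views(context, clip_cams))
    return frames
```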
FramePack is out: Packing Input Frame Context in Next-Frame Prediction Models for Video Generation
In Transformers, as input length increases, the variance of each attention output feature diminishes, leading to a collapsed and overly concentrated representation in the infinite-length limit. Modern normalization techniques, such as LN, mitigate this by independently shifting…
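A quick toy check of this claim (my own demo, not taken from the report): with i.i.d. Gaussian queries, keys, and values, the per-feature variance of the attention output shrinks as the attended length grows.

```python
import numpy as np

def attention_output_variance(seq_len: int, d: int = 64, trials: int = 200) -> float:
    """Empirical per-feature variance of a single attention output for
    i.i.d. Gaussian queries/keys/values, as a function of input length."""
    rng = np.random.default_rng(0)
    outs = []
    for _ in range(trials):
        q = rng.standard_normal(d)
        K = rng.standard_normal((seq_len, d))
        V = rng.standard_normal((seq_len, d))
        logits = K @ q / np.sqrt(d)
        w = np.exp(logits - logits.max())
        w /= w.sum()
        outs.append(w @ V)  # attention output for this query
    return float(np.var(np.stack(outs), axis=0).mean())

for n in (16, 256, 4096):
    print(n, attention_output_variance(n))
# The variance per output feature shrinks as the attended length grows,
# since the softmax averages over more (near-independent) value vectors.
```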
#Llama4 uses inference-time temperature scaling to improve length generalization. We just released a new report on this (with @gboduljak & @jensenzhoujh)! Check it out while it's fresh: ruiningli.com/vanishing-vari… & arxiv.org/abs/2504.02827. TL;DR: We present a vanishing variance…
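Since the tweet is truncated, here is only a generic illustration of what inference-time temperature scaling of attention logits can look like. The schedule in `length_dependent_temperature` is my own assumption (sharpen the softmax beyond the training length to counteract the vanishing variance), not necessarily the one used by Llama 4 or the report.

```python
import math
import numpy as np

def attend(q, K, V, temperature: float = 1.0):
    """Scaled dot-product attention with an extra inference-time temperature."""
    logits = (K @ q) / (math.sqrt(q.shape[-1]) * temperature)
    w = np.exp(logits - logits.max())
    w /= w.sum()
    return w @ V

def length_dependent_temperature(seq_len: int, train_len: int = 8192) -> float:
    """Assumed schedule: tau = 1 within the training length, shrinking
    logarithmically beyond it so attention sharpens at long contexts."""
    if seq_len <= train_len:
        return 1.0
    return 1.0 / (1.0 + 0.1 * math.log(seq_len / train_len))

# Usage on random data, just to show the plumbing:
rng = np.random.default_rng(0)
n, d = 32768, 64
q, K, V = rng.standard_normal(d), rng.standard_normal((n, d)), rng.standard_normal((n, d))
out = attend(q, K, V, temperature=length_dependent_temperature(n))
```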
Not just virtual. Visceral. Stable Virtual Camera simulates exposure through the eye of experience. 📸: Photo by @markb_boss - unsplash.com/photos/a-view-…
Climbing fast… 🚀 ⚡️#2 among all demos ⚡️#21 among all models with 3.3K downloads!
Glad to see that Stable Virtual Camera is now trending #1 among Hugging Face Image-to-Video models and #10 among all demos! Keep it up, folks! 💪
Playing around with Stable Virtual Camera, turning this 2D image into a 3D promo for my latest kicks... #stablevirtualcamera #2Dto3D #sneakers
Stability AI unveiled Stable Virtual Camera, a new diffusion model. It transforms single images into 3D videos with 14 dynamic camera paths. Currently in research preview under a non-commercial license!
Another attempt with Stable Virtual Camera. Didn't quite get the other side right, but still interesting.
Stability AI’s new AI model turns photos into 3D scenes tcrn.ch/4hirJGY
Now spring in Oxford is fully ✨IMMERSIVE✨ with a little touch of Stable Virtual Camera! @UniofOxford @Oxford_VGG @NewCollegeOx @StabilityAI
Today marks the start of spring, according to the astronomical calendar 🌸🌼 📷 | LinaFromOxford (Instagram), Spiralling_Oxford (Instagram) & @NewCollegeOx #SpringEquinox
It's cool to see some 3DGS results on the synthesized views!
And then train a 3DGS based on that. So this can be a single-image-to-3DGS pipeline. :)
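A hedged sketch of that single-image-to-3DGS pipeline; `generate_novel_views` and `fit_gaussian_splats` are hypothetical placeholders standing in for a Stable Virtual Camera wrapper and a 3D Gaussian Splatting trainer, not real APIs.

```python
from dataclasses import dataclass
from typing import Callable, List
import numpy as np

@dataclass
class View:
    image: np.ndarray   # H x W x 3
    camera: dict        # intrinsics + extrinsics

def single_image_to_3dgs(
    input_view: View,
    trajectory: List[dict],                                   # target camera poses
    generate_novel_views: Callable[[List[View], List[dict]], List[View]],
    fit_gaussian_splats: Callable[[List[View]], object],
):
    # 1. Synthesize a consistent set of novel views along the trajectory
    #    from the single input image (e.g., with Stable Virtual Camera).
    views = generate_novel_views([input_view], trajectory)
    # 2. Treat the synthesized views as a posed multi-view dataset and
    #    optimize a 3D Gaussian Splatting scene against them.
    scene = fit_gaussian_splats([input_view] + views)
    return scene
```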
Preprint of today: Zhou and Gao et al., "Stable Virtual Camera: Generative View Synthesis with Diffusion Models" -- stable-virtual-camera.github.io. A fully open-sourced video model that can generate novel views. Another video prior to use!
Stable Virtual Camera: This multi-view diffusion model transforms 2D images into immersive 3D videos with realistic depth and perspective
Thanks @BoyuanChen0 for your kind words! Great work on History-Guided Video Generation! I quickly tested our model to see if it would support long-trajectory navigation out-of-the-box when that paper came out, and it did not work very well. Diffusion-Forcing on Stable Virtual Camera?
A couple of weeks ago I advised some MIT undergrads that novel view synthesis would be solved this year, and it turns out it's happening even sooner than we expected.