yuyin zhou
@yuyinzhou_cs
Assistant Professor @ucsc, Postdoctoral Researcher @Stanford @StanfordAIMI, Medical Image Analysis, Machine Learning / former Ph.D. @JohnsHopkins M.S. @UCLA
[1/7] Excited to share our new survey on Latent Reasoning! The field is buzzing with methods—looping, recurrence, continuous thoughts—but how do they all relate? We saw a need for a unified conceptual map. 🧵 📄 Paper: arxiv.org/abs/2507.06203 💻 Github: github.com/multimodal-art…
🙌 We've released the full version of our paper, OpenVLThinker: Complex Vision-Language Reasoning via Iterative SFT-RL Cycles Our OpenVLThinker-v1.2 is trained through three lightweight SFT → RL cycles, where SFT first “highlights” reasoning behaviors and RL then explores and…
Top AI Papers of The Week (July 7 - 13): - H-Net - HIRAG - Kimi K2 - MemAgent - Adaptive Branching MCTS - A Survey on Latent Reasoning - What Has a Foundation Model Found? Read on for more:
If you are at ICML2025, please join our workshop "Multi-modal Foundation Models and Large Language Models for Life Sciences" tomorrow (Saturday, Jul 19). The workshop features a stellar lineup of invited speakers, including Valentina Boeva @val_boeva, Manolis Kellis…
Just arrived at #ICML25! If you're interested in multimodality, reasoning, and safety — let's connect. I'll also be presenting two papers: 📄 What If We Recaption Billions of Web Images with LLaMA-3? 🗓 Tue, Jul 15 | 11 AM – 1:30 PM PDT 📍 East Hall A-B (#E-3305)…
Models' hidden thoughts make a big impact. That's why "A Survey on Latent Reasoning" is a must-read. It explores how models reason in hidden states — Latent Chain-of-Thought, covering: - Higher-bandwidth latent reasoning - 2 key approaches: vertical vs. horizontal - Training…
Most current language models think out loud, stuffing every thought into words. A typical vocabulary holds about 40,000 tokens, so each token carries roughly 15 bits of information, just under 2 bytes. When a language model must pour every reasoning step through these tiny packets, complex…
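The ~15-bit figure follows directly from the vocabulary size: a uniformly chosen token from a vocabulary of size V carries at most log2(V) bits. A quick back-of-envelope check, using the 40,000-token vocabulary the post assumes (real tokenizers vary, e.g. ~50k for GPT-2 BPE):

```python
import math

# Maximum information a single token can carry, assuming a ~40k-entry
# vocabulary (the post's figure; actual tokenizer sizes differ).
vocab_size = 40_000
bits_per_token = math.log2(vocab_size)

print(f"{bits_per_token:.1f} bits per token")  # ≈ 15.3 bits, under 2 bytes (16 bits)
```

This is an upper bound: real token distributions are far from uniform, so the average information per emitted token is even lower — which is the bandwidth bottleneck the latent-reasoning argument points at.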
A Survey of Latent Reasoning Nice overview on the emerging field of latent reasoning. Great read for AI devs. (bookmark it)
📢 Women in #MICCAI Webinar: From Paper to Story: Crafting Effective Research Presentations - Best Practices from Oral Presentation Winner and Experienced Researchers 📅 Thursday, July 10, 2025 ⏲️ 4:30 PM PDT / 7:30 PM EDT / 11:30 PM UTC 📝Registration: us02web.zoom.us/webinar/regist…
🧵 1/ 🚀 Excited to share our latest work: Fractional Reasoning. We introduce a new way to continuously control the depth of reasoning and reflection in LLMs for scaling test-time compute, rather than just switching between "on" and "off" prompts. 💻 Website: shengliu66.github.io/fractreason/ #AI…
🥳Thrilled to share that our paper SPA: Efficient User-Preference Alignment against Uncertainty in Medical Image Segmentation has been accepted to @ICCVConference 🎉🎉 We present SPA — an efficient, user-friendly framework that aligns medical segmentation to clinician…
OpenVision is accepted by #ICCV2025 🥳🥳 Additionally, stay tuned for v2, arriving very soon with even greater efficiency and capability.
Still relying on OpenAI’s CLIP — a model released 4 years ago with limited architecture configurations — for your Multimodal LLMs? 🚧 We’re excited to announce OpenVision: a fully open, cost-effective family of advanced vision encoders that match or surpass OpenAI’s CLIP and…
Thank you @GoogleResearch for supporting our healthcare research! Honored to be one of the Google Research Scholars this year!
We’re announcing the 87 professors selected for the 2025 Google Research Scholar Program — join us in congratulating these exceptional recipients and learn more about their groundbreaking work at goo.gle/rs-recipients. #GoogleResearch #GoogleResearchScholar
🚨 Deadline Extended! ICCV 2025 Workshop CVAMD has extended the submission deadline to June 30 (AoE)! 📅 Workshop Date: October 19–20, 2025 (in conjunction with ICCV) 📍 Location: Honolulu, Hawaii 🌐 Website: cvamd.github.io/CVAMD2025/ Feel free to share and submit! #ICCV2025
The deadline is now extended to June 30, and we will have multiple oral and best paper awards across categories. Papers that have already been submitted or published elsewhere are welcome in our highlight track.
If you're passionate about building the world's first AI virtual cell model, we're hiring at @Xaira_Thera-- come join us! job-boards.greenhouse.io/xairatherapeut…
🚀 Xaira Therapeutics has just dropped a game-changer for AI-driven biology. Today, we unveiled X-Atlas/Orion, the largest publicly available genome-wide Perturb-seq dataset to date—spanning 8.4 million single cells with perturbations across all ~20,000 human protein-coding…
Mourning the passing of @atulbutte. I learned so much from the precious few chances I had to be with him. He's someone who always picked up the phone whenever I needed advice, on anything, anytime. A great man who made everyone around him better.
Please visit our poster this afternoon. @Jinrui_Yang_, @yuyinzhou_cs, and I will all be there
I will present my poster, "LayerDecomp," at #CVPR2025 Friday afternoon (June 13th). ⏰Time: 4PM - 6PM. 📍Location: #217, Exhibit Hall D. Stop by and say hi! I'd love to chat with you about our research. #CVPR2025 #CVPR25 #ComputerVision #GenAI #AIResearch #DeepLearning
🚀 Introducing LayerDecomp: our latest generative framework for image layer decomposition, which can output photorealistic clean backgrounds and high-quality transparent foregrounds, faithfully preserving visual effects like shadows and reflections. Our key contributions include…