Yutong Zhang
@zhangyt0704
CS master's student @Stanford | previously undergrad @UofIllinois
AI companions aren’t science fiction anymore 🤖💬❤️ Thousands are turning to AI chatbots for emotional connection – finding comfort, sharing secrets, and even falling in love. But as AI companionship grows, the line between real and artificial relationships blurs. 📰 “Can A.I.…

Thank you to everyone for your energy and enthusiasm in joining this adventure with me so far!
Are AI scientists already better than human researchers? We recruited 43 PhD students to spend 3 months executing research ideas proposed by an LLM agent vs human experts. Main finding: LLM ideas result in worse projects than human ideas.
Chatbot companions are going to be our society's next moral panic, I suspect. Work by @zhangyt0704 suggests that people turn to them especially when they lack outside social support.
There’s a lot of speculation around how AI will change human relationships. To dig into this question, we collect surveys from 1000+ Character.AI users and 400,000+ messages to analyze the relationship between AI companionship and well-being. Preprint:…
The rise of AI companions and how they affect well-being 🤔 Check out work below from @zhangyt0704 @dorazhao9 👇 🤖 People with less offline support are more likely to seek companionship from AI 🤖 General interaction is linked to greater well-being, but seeking companionship is tied…
What if LLMs could learn your habits and preferences well enough (across any context!) to anticipate your needs? In a new paper, we present the General User Model (GUM): a model of you built from just your everyday computer use. 🧵
🚨 70 million US workers are about to face their biggest workplace transformation due to AI agents. But nobody asks them what they want. While AI races to automate everything, we took a different approach: auditing what workers want vs. what AI can do across the US workforce.🧵
New #ACL2025NLP Paper! 🎉 Curious what AI thinks about YOU? We interact with AI every day, offering all kinds of feedback, both implicit ✏️ and explicit 👍. What if we used this feedback to personalize your AI assistant to you? Introducing SynthesizeMe! An approach for…
Trust me, try it out - it’s incredibly useful!!
Todo lists, docs, email style – if you've got individual or team knowledge you want ChatGPT/Claude to have access to, Knoll (knollapp.com) is a personal RAG store from @Stanford that you can add any knowledge into. Instead of copy-pasting into your prompt every time,…
Check out 🔥 EgoNormia: a benchmark for physical social norm understanding egonormia.org Can we really trust VLMs to make decisions that align with human norms? 👩⚖️ With EgoNormia, an 1,800-video egocentric 🥽 QA benchmark, we show that this is surprisingly challenging…
LM agents today primarily aim to automate tasks. Can we turn them into collaborative teammates? Introducing Collaborative Gym (Co-Gym), a framework for enabling & evaluating human-agent collaboration! I've now gotten used to agents proactively seeking my confirmation or deeper input.
People like to talk: it's easy and natural. Now that there are Large *Audio* Models 🔊, which model do users like the most? Introducing Talk Arena🎤: an open platform where users speak to LAMs and receive text responses. Through open interaction, we focus on rankings based on…
New paper: Do social media algorithms shape affective polarization? We ran a field experiment on X/Twitter (N=1,256) using LLMs to rerank content in real-time, adjusting exposure to polarizing posts. Result: Algorithmic ranking impacts feelings toward the political outgroup!🧵⬇️