Joon Sung Park
@joon_s_pk
CS Ph.D. student @StanfordHCI + @StanfordNLP. Previously @MSFTResearch, @IllinoisCS & @Swarthmore. Oil painter. HCI, NLP, generative agents, human-centered AI
Simulating human behavior with AI agents promises a testbed for policy and the social sciences. We interviewed 1,000 people for two hours each to create generative agents of them. These agents replicate their source individuals’ attitudes and behaviors. 🧵arxiv.org/abs/2411.10109

What if LLMs could learn your habits and preferences well enough (across any context!) to anticipate your needs? In a new paper, we present the General User Model (GUM): a model of you built from just your everyday computer use. 🧵
📢 New policy brief: AI agents that can simulate human behaviors and attitudes can help test ideas in social science. Our latest brief introduces a generative AI agent architecture that simulates the attitudes of 1,000+ real people. Learn more: hai.stanford.edu/policy/simulat…
Todo lists, docs, email style – if you've got individual or team knowledge you want ChatGPT/Claude to have access to, Knoll (knollapp.com) is a personal RAG store from @Stanford that you can add any knowledge into. Instead of copy-pasting into your prompt every time,…
Book announcement: with @stanfordmav, we are publishing "Flash Teams: Leading the Future of AI-Enhanced, On-Demand Work." It's a leadership book synthesizing a decade of @Stanford research on how computing, online platforms, and AI reshape teamwork. Coming October from @mitpress!
New paper: Do social media algorithms shape affective polarization? We ran a field experiment on X/Twitter (N=1,256) using LLMs to rerank content in real-time, adjusting exposure to polarizing posts. Result: Algorithmic ranking impacts feelings toward the political outgroup!🧵⬇️
Crazy interesting paper in many ways:
1) Voice-enabled GPT-4o conducted 2-hour interviews of 1,052 people
2) GPT-4o agents were given the transcripts & prompted to simulate the people
3) The agents were given surveys & tasks. They achieved 85% accuracy in simulating interviewees
Excited to share this new work, which introduces a method for creating realistic generative agents that can be used for synthetic social science. Such a pleasure to work with lead author @joon_s_pk
So thrilled to share our work on this~! Grounding generative agents in real, verifiable behavior is the step that takes this method from producing simulacra to creating simulations that capture the richness of human experiences.
In a follow-up to the generative agents paper, @joon_s_pk+team demonstrate that anchoring agents in rich qualitative information about an individual enables simulations to replicate an individual's attitudes 85% as well as the individual replicates themselves, + reduces bias
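The "85% as well as the individual replicates themselves" figure is a normalized accuracy: the agent's raw accuracy at predicting a participant's responses, divided by how consistently that participant reproduces their own answers when re-surveyed later. A minimal sketch of that normalization (my own illustration of the idea, not the paper's code):

```python
def normalized_accuracy(agent_accuracy: float, self_consistency: float) -> float:
    """Normalize the agent's accuracy by the participant's own test-retest
    consistency. A score of 1.0 means the agent predicts the participant's
    responses as well as the participant predicts their own later responses.

    agent_accuracy:   fraction of survey items the agent matched.
    self_consistency: fraction of items the participant answered the same
                      way when re-surveyed (their test-retest ceiling).
    """
    if not 0.0 < self_consistency <= 1.0:
        raise ValueError("self_consistency must be in (0, 1]")
    return agent_accuracy / self_consistency
```

For example, an agent matching 68% of a participant's answers, against a participant who reproduces 80% of their own answers on retest, scores 0.68 / 0.80 = 0.85 normalized.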
How close can LM agents simulate people? We interview person P for 2 hours and prompt an LM with the transcript, yielding an agent P'. We find that P and P' behave similarly on a number of surveys and experiments. Very excited about the applications; this also forces us to think…
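The P → P′ pipeline described above, conditioning a language model on an interview transcript and then administering surveys to the resulting agent, can be sketched roughly as follows. This is a hedged illustration only: `query_llm` is a hypothetical stand-in for a real LLM API call, and the prompt wording is mine, not the authors'.

```python
# Sketch of the interview-to-agent idea: prompt an LLM with person P's
# interview transcript, then pose survey questions to the resulting agent P'.

def build_agent_prompt(transcript: str, question: str) -> str:
    """Compose a prompt that conditions the model on P's interview,
    instructing it to answer as that participant would."""
    return (
        "Below is an interview transcript with a study participant.\n"
        "Answer the question exactly as that participant would.\n\n"
        f"--- Interview transcript ---\n{transcript}\n\n"
        f"--- Question ---\n{question}\n"
    )

def simulate_agent(transcript: str, questions: list[str], query_llm) -> list[str]:
    """Run agent P' over a battery of survey questions.
    `query_llm` is any callable mapping a prompt string to a model reply."""
    return [query_llm(build_agent_prompt(transcript, q)) for q in questions]
```

In the paper's setup, the agent's answers would then be compared item-by-item against P's real survey responses to score how closely P′ tracks P.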
This is groundbreaking, super exciting work!