William J. Brady
@william__brady
Assistant prof @NorthwesternU @KelloggSchool studying emotion, morality, social networks, AI, psych of tech. #firstgen college graduate
👀New preprint! In 3 prereg experiments we study how engagement-based algorithms amplify ingroup, moral and emotional (IME) content in ways that disrupt social norm learning (and test one solution!) w/ Josh Jackson and my amazing lab managers @merielcd & Silvan Baier 🧵👇

Explainable AI has long frustrated me by lacking a clear theory of what explanations should do. Improve use of a model for what? How? Given a task, what's the maximum effect an explanation can have? It's complicated because most methods are functions of the features & prediction, but not the true state. 1/
Our study led by @ChengleiSi reveals an “ideation–execution gap” 😲 Ideas from LLMs may sound novel, but when experts spend 100+ hrs executing them, they flop: 💥 👉 human‑generated ideas outperform on novelty, excitement, effectiveness & overall quality!
Are AI scientists already better than human researchers? We recruited 43 PhD students to spend 3 months executing research ideas proposed by an LLM agent vs human experts. Main finding: LLM ideas result in worse projects than human ideas.
Thrilled to share that my first first-authored publication is officially out (soon to be in press at Social Cognition)! 🚀 "From Data to Discovery: Unsupervised Machine Learning in Social Cognition" 📄 OSF preprint: osf.io/preprints/osf/…
Awesome write-up in Kellogg Insight on our paper published at #CHI2025 this week! insight.kellogg.northwestern.edu/article/are-we…
The language we use signals our identity to others and changes whether they see us as open- vs. closed-minded. Across five preregistered experiments (N = 2,498), we found clear evidence that the use of moral–emotional expressions in social media messages increases intentions to…
I will be leaving HBS and joining Kellogg’s MORS group this summer. I’m very grateful to the amazing HBS community for all the support over the years. Special shoutout to my OB colleagues: it was not an easy decision to leave, and I know I’ll miss each of you dearly.
📄NEW PAPER📄 Ever wondered what content people actually pay *attention* to online? Our new research reveals that you likely pay attention to far more varied political content than your likes and shares suggest
New preprint! My entry into the ongoing AI empathy discussion: "Reframing the performance and ethics of 'empathic' AI: Wisdom of the crowd and placebos." osf.io/preprints/psya…
I'm extremely proud of this new paper, out @PsychScience, and extremely fortunate to have worked on it with the inimitable @amandaegeiser and @deborahasmall. We find that when comparing moral wrongs, people are (much) more willing to "scale up" than "scale down" condemnation...
New paper with @IkeMDSilver1 and @deborahasmall just out at Psychological Science: People often compare bad acts to other bad acts. Is it worse to kill two people than to kill one? Should someone who assaulted an adult be punished less than someone who did the same to a child?…
New in @ScienceMagazine: "Large AI models are cultural and social technologies" Working with brilliant colleagues Henry Farrell, Alison Gopnik, and Cosma Shalizi, we challenge the prevailing narrative about AI models as autonomous agents. science.org/stoken/author-…
"Tests of AI empathy typically don't compare a chatbot's cold comfort with the kind of socially embedded care that truly nourishes us. If they did, the chatbots would lose" theguardian.com/commentisfree/… An important challenge for study of AI and empathy. And an empirical question!
Does disgust increase moral condemnation? New meta-analysis (101 studies with 18,180 participants, g = .40) says “yes”. No evidence of publication bias, and exclusion of outliers did not change effect size. Paper by Salvo, Ottaviani, & Mancini, 2025. tinyurl.com/mu8sus2b
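For readers unfamiliar with effect sizes like the g = .40 reported above, here is a minimal sketch (not from the cited paper; all numbers are hypothetical) of how Hedges' g is computed for a single study and how study-level estimates can be pooled with inverse-variance weights. A real meta-analysis like this one would typically use a random-effects model; the simple fixed-effect pooling below is only for illustration.

```python
# Illustrative sketch only: hypothetical data, simplified pooling.
import numpy as np

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Hedges' g for two independent groups (bias-corrected Cohen's d)."""
    sd_pooled = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sd_pooled
    correction = 1 - 3 / (4 * (n1 + n2) - 9)  # small-sample bias correction
    return d * correction

def g_variance(g, n1, n2):
    """Approximate sampling variance of g."""
    return (n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2))

# Hypothetical study-level summaries: (mean, sd, n) for disgust vs. control.
studies = [
    ((5.2, 1.1, 40), (4.7, 1.2, 40)),
    ((4.9, 1.3, 60), (4.4, 1.2, 55)),
    ((5.5, 1.0, 35), (5.1, 1.1, 38)),
]

gs = np.array([hedges_g(*a, *b) for a, b in studies])
vs = np.array([g_variance(g, a[2], b[2]) for g, (a, b) in zip(gs, studies)])

# Simple inverse-variance (fixed-effect) pooling of the study estimates.
weights = 1 / vs
pooled_g = np.sum(weights * gs) / np.sum(weights)
print(f"Pooled g ≈ {pooled_g:.2f}")
```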
Our new piece in Nature Machine Intelligence: LLMs are replacing human participants, but can they simulate diverse respondents? Surveys use representative sampling for a reason, and our work shows how LLM training prevents accurate simulation of different human identities.
Exciting!
The Culture and Morality Lab (🐫) will be at @SPSPnews this week, presenting research at the @HistoricalPsy preconference and the main conference! Come check out these presentations!
If you're going to #SPSP2025, please consider coming to our Friday symposium on false political pessimism. Features talks by @gartoncat, @eriksantoro, and Kristin Laurin!
Excited to be chairing a symposium next week @SPSPnews ft. the amazing @CRobertson500, @Amit_Goldenb, and (virtually) @Sheena_Iyengar. Theme: extreme social media users fool us into thinking they are the norm. Here's an overview of what everyone will be talking about. 1/5