Angelina Wang @ACL @angelinawang.bsky.social
@ang3linawang
Asst Prof @CornellInfoSci, @Cornell_Tech | Responsible AI | Prev: @StanfordHAI @PrincetonCS @Berkeley_EECS
Have you ever felt that AI fairness was too strict, enforcing fairness when it didn’t seem necessary? How about too narrow, missing a wide range of important harms? We argue that the way to address both of these critiques is to discriminate more 🧵
Excited to share our #FAccT25 translation tutorial, where we'll explore how to reconceptualize AI measurement as a stakeholder-engaged design practice 🙋🔍🖥️ Next week Thurs 6/26 at 3:15 pm (last day and session - please don't leave the conference early!) 🧵
Alright, people, let's be honest: GenAI systems are everywhere, and figuring out whether they're any good is a total mess. Should we use them? Where? How? Do they need a total overhaul?
Really insightful talk by @ang3linawang on contextual equity at #CVPR2025!
Avoiding race talk can feel unbiased, but it often isn’t. This racial blindness can reinforce subtle bias in humans. Aligned LLMs do the same: when context is unclear, they suppress race and fail to trigger safety guardrails, as if the models are aligned, but blind. See 🧵below!
7/ 📢 Accepted to #ACL2025 Main Conference! See you in Vienna. Work done by @1e0sun, @ChengzhiM, @vjhofmann, @baixuechunzi. Paper: arxiv.org/abs/2506.00253 Project page: slhleosun.github.io/aligned_but_bl… Code & Data: github.com/slhleosun/alig…
Join us at #CVPR2025 Demographic Diversity in Computer Vision workshop tomorrow! 📅 Wednesday, June 11, 9am-6pm 📍 room 213 (main session) + Hall D (poster sessions), the Music City Center We have an amazing lineup of speakers and panelists! Can't wait to meet you all there :)
Very smart framework. More of this!
The US government recently flagged my scientific grant in its "woke DEI database". Many people have asked me what I will do. My answer today in @Nature. We will not be cowed. We will keep using AI to build a fairer, healthier world. nature.com/articles/d4158…
🚨🚨New Working Paper🚨🚨 AI-generated content is getting more politically persuasive. But does labeling it as AI-generated change its impact?🤔 Our research says the disclosure of AI authorship has little to no effect on the persuasiveness of AI-generated content. 🧵1/6
For those who have requested the video, my HAI seminar “Beyond Benchmarks: Building a Science of AI Measurement” is up! I discuss some of @stai_research’s latest work aimed at improving AI measurement foundations towards real-world impact. youtu.be/PkuoEJn6PlA?si…
This is a very practical and useful resource from @ang3linawang! I highly recommend folks read this because, in my experience, most folks in CS and ML have a warped and seriously impoverished understanding of fairness (e.g., thinking fairness only means matching accuracy across groups).
I've recently put together a "Fairness FAQ": tinyurl.com/fairness-faq. If you work in non-fairness ML and you've heard about fairness, perhaps you've wondered things like what the best definitions of fairness are, and whether we can train algorithms that optimize for it.
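To make that point concrete, here is a minimal, purely illustrative sketch (not taken from the Fairness FAQ) contrasting two common group-fairness notions: accuracy parity, the "matching accuracy across groups" view mentioned above, and demographic parity, which looks at positive-prediction rates instead. The function names and the toy data are hypothetical, chosen only to show that a model can satisfy one notion while violating the other.

```python
import numpy as np

def accuracy_parity_gap(y_true, y_pred, group):
    """Absolute difference in accuracy between groups 0 and 1."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    acc = [np.mean(y_pred[group == g] == y_true[group == g]) for g in (0, 1)]
    return abs(acc[0] - acc[1])

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between groups 0 and 1."""
    y_pred, group = map(np.asarray, (y_pred, group))
    rate = [np.mean(y_pred[group == g]) for g in (0, 1)]
    return abs(rate[0] - rate[1])

# Hypothetical toy data: the classifier is equally accurate on both groups,
# yet predicts the positive class twice as often for group 1.
y_true = np.array([1, 1, 0, 0, 1, 1, 1, 1])
y_pred = np.array([1, 1, 0, 0, 1, 1, 1, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(accuracy_parity_gap(y_true, y_pred, group))   # 0.0 -> accuracies match
print(demographic_parity_gap(y_pred, group))        # 0.5 -> positive rates differ
```

The two metrics can disagree, which is exactly why "which definition of fairness?" is a substantive question rather than a solved one.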
Appropriate reliance is key to safe and successful user interactions with LLMs. But what shapes user reliance on LLMs, and how can we foster appropriate reliance? In our #CHI2025 paper, we explore these questions through two user studies. 1/7
What makes writing interesting? Can an LLM do it? Do we need a human to feel it's worth choosing each word, or to shape it w/ individual experience? Can it be interesting w/out intention? Does it require inner conflict? I have lots of questions, no answers statmodeling.stat.columbia.edu/2025/02/25/wha…