TRAILS
@trails_ai
The NSF-NIST Institute for Trustworthy AI in Law & Society focuses on broad participation in AI, deep technical research, and informed governance of AI systems.
A new round of $750K in seed funding has been awarded to faculty and students who are advancing trustworthy AI at all four of TRAILS’ academic institutions: @UofMaryland, @GWtweets, @MorganStateU, and @Cornell. From disaster response to education, copyright law, and AI red…

I'm excited to share that GWU is hosting a World Bank Symposium on AI & the Future of Human Capital. For those not steeped in the development world, Human Capital == people’s skills, knowledge, and health worldbank.org/en/events/2025… Some flexibility in the deadline may be possible.
Supported by TRAILS, Huaishu Peng (@huaishup), Ge Gao, Jiasheng Li (@jsli_21) and @CornellInfoSci's Malte Jung are building wearable AI to turn nonverbal cues—like nods and gestures—into touch and audio signals to improve communication between blind individuals and their…

You can’t realistically train a separate model for every user preference. Can one model cater to them all? @furongh unveils a technique to customize AI models on-the-fly to user goals, reducing the computational cost of tailoring AI systems to individual needs.
We had our first human–computer cooperative AI tournament at UMD. Key takeaways: 1) computers are getting better at trivia 2) they still suck at calibration 3) our teaming mechanic kept the games competitive and mostly fun (at least that’s what the players said).
Excited to speak at the Workshop on Computer Vision in the Wild @CVPR 2025! 🎥🌍 🗓️ June 11 | 📍 Room 101 B, Music City Center, Nashville, TN 🎸 🧠 Talk: From Perception to Action: Building World Models for Generalist Agents Let’s connect if you're around! #CVPR2025 #robotics…
Do chatbots provide sound advice to stop smoking? Most of the time they do, says a TRAILS study that analyzed 3 chatbots. But sometimes answers include errors or misinformation, suggesting the need for improvements to these AI-powered tools. Learn more: trails.umd.edu/news/anti-smok…
An interview w/@FeiziSoheil by @TheKellyOGrady on AI text detection was featured on @cbssaturday. Feizi explains how current AI detection tools are often unreliable & should be used with caution in high-stakes settings involving academic integrity. Watch: go.umd.edu/1zt5
Do you like trivia? Can you spot when AI is feeding you BS? Or can you make AIs turn themselves inside out? Then on June 14 at College Park (or June 21 online), we have a competition for you.
🚨 Introducing RESTOR: Knowledge Recovery in Machine Unlearning: openreview.net/pdf?id=BbwlJpN… In the era of large language models (LLMs), the right to be forgotten has never been more critical. As models absorb vast amounts of web-scale data, they risk memorizing misinformation,…
🥳Excited to share our paper RESTOR: Knowledge Recovery in Machine Unlearning is accepted to #TMLR! We introduce a benchmark where unlearning means not just forgetting target data, but recovering the behavior of an ideal model that never saw them. Paper: openreview.net/pdf?id=BbwlJpN…
1/ 🌎 Real-world datasets are messy—some domains dominate while others barely show up. This imbalance leads traditional #RLHF (like #GRPO) to favor prevalent domains, hurting fairness and generalization. So, how do we fix this? 🔥 Our new method, 🪩DISCO 🪩, elegantly tackles…
📢 New pub! “Actionable Insights Regarding Cyberbullying Among College Students” is out now from @springerpub. Even better? It’s the first paper with “Dr.” in front of @jewelsfromjuana’s name. 🎓 connect.springerpub.com/content/sgrvv/…
Can you spot when AI bluffs?🤖 Can you outguess AI—or work with one to dominate trivia?🏁 🏆 We are hosting the first Human–AI cooperative trivia (Quizzing) competition. 🎲Play, 🛠️build, or ✍🏼write questions... ...and win prizes 🎁. 🥳 It’s fun, free, and happening this June 🧠🤖👇
Hal Daumé III (@haldaume3) & Katie Shilton were part of a group that won @UofMaryland's Invention of the Year Award in the social innovation category for their technology that can teach students in any discipline about AI & information literacy. Read more: trails.umd.edu/news/trails-le…

A recent @nytimes article quoted Soheil Feizi (@FeiziSoheil) and cited his study that found A.I.-detection services erroneously flag human-written text as A.I.-generated about 6.8% of the time. That same study has also been accepted to #ACL2025. nytimes.com/2025/05/17/sty…
After an overnight delay in Denver on my way from Albuquerque (#NAACL2025) to Houston, I’ve finally arrived at the beautiful #RiceUniversity campus for the 2nd Texas Colloquium on Distributed Learning (TL;DR). Looking forward to presenting my talk: “Test-Time Thinking for Trust:…
🌏 Spring AI Tour: From Singapore to New Mexico to Texas! Catch me & the team at: 🇸🇬 ICLR 2025 (Apr 24–28, Singapore) 📍 4 main papers, 5 workshop papers, 1 workshop organized, 2 keynotes, 1 talk, 1 panel. Topics: 🛡 AegisLLM @ BuildingTrust 🎭 AdvBDGen (stealthy backdoors) 🌐…
TRAILS Director Hal Daumé (@haldaume3) joined "AI or Not" the podcast with Pamela Isom to talk about his shift from technical AI research to its societal impact—touching on governance, copyright, and the need for stronger regulations. go.umd.edu/1yvy