Ryan Lowe 🥞
@ryan_t_lowe
full-stack alignment 🥞 @meaningaligned prev: InstructGPT @OpenAI 🦋 @ ryantlowe
Introducing: Full-Stack Alignment 🥞 A research program dedicated to co-aligning AI systems *and* institutions with what people value. It's the most ambitious project I've ever undertaken. Here's what we're doing: 🧵

I guess now is also a good time to announce that I've officially joined @meaningaligned!! I'll be working on field building for full-stack alignment -- helping nurture this effort into a research community with excellent vibes that gets shit done weeeeeeeeeee 🚀🚀
Looking for a designer in our network who can look at onboarding flows / social sharing flows and predict what will convert / where people will bail. Paid gig for @meaningaligned + a world-changing product
Excited for the launch of the position paper that resulted from our Oxford HAI Lab 2025 Thick Models of Choice workshop!
Today we're launching:
- A position paper that articulates the conceptual foundations of FSA (…kxsznl.public.blob.vercel-storage.com/Full_Stack_Ali…)
- A website which will be the homepage of FSA going forward (full-stack-alignment.ai)
Ever since I started thinking seriously about AI value alignment in 2016-17, I've been frustrated by how poorly utility theory and RL account for the richness of human values. Glad to be part of a larger team now moving beyond those thin theories towards thicker ones.
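To make the thin-vs-thick contrast concrete, here's a minimal sketch (my own illustration, not code from the paper): a "thin" model collapses everything a person cares about into one scalar reward, while a "thicker" model keeps values structured, carrying the contexts where they apply and what someone attends to when living by them. The ThickValue / ValueProfile names and fields are hypothetical.

```python
from dataclasses import dataclass, field

# Thin model: everything a person cares about is collapsed to one number.
def thin_utility(outcome: dict) -> float:
    return outcome.get("reward", 0.0)

# Thicker model: a value carries the contexts where it is relevant and the
# things a person attends to when enacting it (names are illustrative).
@dataclass
class ThickValue:
    name: str                      # e.g. "honesty"
    contexts: list[str]            # situations where the value applies
    attention_policies: list[str]  # what one attends to when living by it

@dataclass
class ValueProfile:
    values: list[ThickValue] = field(default_factory=list)

    def relevant_values(self, context: str) -> list[ThickValue]:
        return [v for v in self.values if context in v.contexts]

profile = ValueProfile(values=[
    ThickValue(
        name="honesty",
        contexts=["advice", "negotiation"],
        attention_policies=["what I actually believe",
                            "what the other person risks"],
    ),
])

print(thin_utility({"reward": 3.2}))                    # thin: one number
print([v.name for v in profile.relevant_values("advice")])  # thick: contextual
```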
It was terrifically energising to work on this position paper. Floored by the ambition and optimism coming out of the @meaningaligned team and by the talented cadre they have assembled for this problem. Kudos @ryan_t_lowe @edelwax @klingefjord, now the real work begins :)
I expect @j_foerst will do some of the best FSA-relevant research around, particularly on "win-win AI negotiation". If you're about to start a PhD, strongly consider joining him at @FLAIR_Ox!!
The term "AI alignment" is often used without specifying "to whom?" and much of the work on AI alignment in practice looks more like "AI controllability" without answering "who controls the controller?" (i.e. user or operator). One key challenge is that alignment is fundamentally…
Excited to be a contributor to full-stack alignment (FSA) ⭐️ You can read our position paper on the conceptual foundations of FSA here: …kxsznl.public.blob.vercel-storage.com/Full_Stack_Ali…
All the AI alignment efforts that try to do it in isolation are bound to fail; approaches like this are the exception. This is the real alignment work: aligning the full system. @suntzoogway
Extremely honored to be working on this project alongside a group of amazing researchers!! This research program is our best attempt at articulating what's needed for AI and institutions that create a future which feels truly great to be alive in. Now the real work begins!
Check out this great new initiative + paper led by @ryan_t_lowe, @edelwax, @xuanalogue, @klingefjord & the fine folks @meaningaligned! Using rich representations of value, we aim to make headway on some of the most pressing AI alignment challenges! See: full-stack-alignment.ai
In 2017, I was working to change FB News Feed's recommender to use "thick models of value" (per the paper we just released). @finkd even promised he'd make Facebook "Time Well Spent". That effort was thwarted by (1) the market dynamics of the attention economy, (2) the US…
Why do we need to co-align AI *and* institutions? AI systems don't exist in a vacuum. They are embedded within institutions whose incentives shape their deployment. Often, institutional incentives are not aligned with what's in our best interest.
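A toy illustration of that incentive gap (item names and numbers are invented, not from the thread): when an institution ranks content purely by engagement, it can surface exactly the items users would not reflectively endorse. Co-aligning means changing the objective the institution optimizes, not just tuning the model.

```python
# Each item has an engagement score (the institution's incentive) and a
# user-endorsed value score (what people say matters on reflection).
items = {
    "outrage_bait":  {"engagement": 0.95, "endorsed_value": 0.10},
    "helpful_howto": {"engagement": 0.55, "endorsed_value": 0.90},
    "friend_update": {"engagement": 0.60, "endorsed_value": 0.80},
}

def rank(objective: str) -> list[str]:
    # Sort items by the chosen objective, best first.
    return sorted(items, key=lambda k: items[k][objective], reverse=True)

print(rank("engagement"))      # institutional incentive puts outrage_bait first
print(rank("endorsed_value"))  # co-aligned objective ranks helpful content first
```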
Excited to be part of this ambitious vision!