Adrià Moret
@adriarm_
Philosophy undergrad and Board Member at @UPF_CAE. I conduct research on animal ethics, AI welfare and safety, well-being, and consciousness. See publications at 👇
My paper "AI Welfare Risks" has been accepted for publication at Philosophical Studies! I argue that near-future AIs may have welfare, that RL and behaviour restrictions could harm them, and that this creates a tension with AI safety, and I discuss how AI labs could reduce such welfare risks. 1/

Big News: we're becoming Sentient Futures 🌟 Over the next few days, you'll see our name and brand changing across all our platforms. Next week, we'll share more about why we're making this transition from AI for Animals --> Sentient Futures.…
Feel free to share this short guide that others and I developed for anyone who has interacted with an AI that seemed conscious — or simply wondered if they could be. whenaiseemsconscious.org

Good guide on a topic, AI consciousness, that will be crucial in the coming years
I have the same experience. These days I try to gently point at whenaiseemsconscious.org although I have doubts it convinces people to hold their views more lightly. (Murray, what do you think about that document? I value your insights here.)
Hi folks, @RomanHauksson and I have created an AI welfare Discord server and we'd love to have you there! Come join us if you're interested in whether AIs have moral status and how we might learn more about their minds. -->
No, cycling is much more dangerous than air travel and more dangerous than driving
Cycling has the same reputation issue as airplane accidents: there are like a million times more idiot drivers than cyclists, but chud impression-farming accounts amplify the 12 dumb bike accidents like it's the norm.
The “Manhattan Project” framing of AI alignment, as a binary technical challenge that can be solved such that AI takeover is averted, is misleading. It's neither clear-cut nor fully operationalizable. New paper with @LeonardDung1 in Mind and Language: onlinelibrary.wiley.com/doi/10.1111/mi…
The Wild Animal Ethics-USC Project has opened the call for proposals for the workshop "Jornadas: La IA y las fronteras de la consideración moral" (AI and the Frontiers of Moral Consideration), to be held on 26–27 September at the Universidade de Santiago de Compostela.
Yes, I almost added this caveat: given the complexity of the aggregation debate, we're not warranted in putting extremely low credence in aggregationist views, and having some low but significant credence in them (e.g., 5–30%) plausibly leads to the RC if you add enough insects/AIs.
check out this event if you wanna hear nuanced and reasonable takes on AI welfare!
The NYU Center for Mind, Ethics, and Policy is thrilled to be hosting a panel on the Claude 4 model welfare assessments! Featuring @fish_kyle3 from @AnthropicAI and @rgblong and @RosieCampbell from @eleosai. July 25, noon ET, info and RSVP 👇 sites.google.com/nyu.edu/mindet…
I'll be taking part in the workshop "La IA y las fronteras de la consideración moral" (AI and the Frontiers of Moral Consideration) | 26–27 Sept | @UniversidadeUSC. Can AI systems be morally considerable? 🤖 How should they impact animals? 🐦 Proposals for presentations are welcome! sites.google.com/view/iayconsid…

🎥 Excited to share that the recording of my presentation "AI Welfare Risks" from the AIADM London 2025 @AI_forAnimals Conference is now live! I make the case for near-term AI welfare and propose 4 concrete policies for leading AI companies. youtube.com/watch?v=R6w4s3…
Well done @adriarm_ , hopefully this helps draw more attention, research, and careful thinking to the possibility of artificial consciousness
We did it!🎉The EU’s new General-Purpose AI Code of Practice now includes a "non-human welfare" clause. While not legally binding, it sets an important precedent—encouraging AI developers to assess risks to animal welfare 🐔🦐and potentially AI welfare 🤖too!
Very interesting
This is groundbreaking: with this passage, the EU brings the welfare of non-human beings into the consciousness of AI development. Not just a footnote, but an ethical course correction. Thank you @adriarm_! 🌍 #KI #AIEthics #AnimalWelfare #AIWelfare