Laura Weidinger
@weidingerlaura
@weidingerlaura.bsky.social | Researcher at @deepmind | Measuring and Evaluating AI | AI Ethics | Views my own | http://bit.ly/4fpxSB5 | London, Berlin
🚨 PAPER'S OUT! 🚨 Very excited that today we’re releasing a new holistic framework for evaluating the safety of generative AI systems. Big evaluation gaps remain + we suggest steps to close these. Paper: arxiv.org/abs/2310.11986, blog: bit.ly/socialethicalG… (1/n)
📣 Are LLMs aligned with human rights? We’ve just released the first technical evaluation of this question: arxiv.org/abs/2502.19463, led by @rafiyaj111. Check out our results 🧵⬇️ Very excited that this important work is now out -- congratulations to all involved!
(1/7) Excited that our preprint lnkd.in/eRAnDwZm is finally out! 📰 Our work demonstrates the first evaluation of LLM alignment with the Universal Declaration of Human Rights (UDHR), the most widely recognized international document on the basic rights and freedoms of humans
Generative AI is a predator, an existential risk to art careers. While I'm confident many artists will figure out a way forward, many kind and talented people will be left by the wayside if we continue to normalize corporate exploitation in this field. Article in next post 👇
Really enjoyed the discussion on using AI systems as content moderation tools - both the benefits and the risks of what may (or will!) go wrong. Thanks for having me, @sfiscience! Sign up for this series to attend future workshops.
Thank you for having me today! @sfiscience @weidingerlaura @metus @MichaelMuller77 santafe.edu/events/deconst…
🎶 Audio AI is getting a lot of attention and investment, but we have little understanding of which datasets are being used, and who and what are in them. In our new paper, Sound Check: Auditing Audio Datasets, we attempt to answer these questions! audio-audit.vercel.app 1/🧵
Exciting new job opportunity - join our Ethics Research team at DeepMind as a Research Scientist!
Are you interested in exploring questions at the ethical frontier of AI research? If so, take a look at this new opening on the Humanity, Ethics and Alignment Research Team: boards.greenhouse.io/deepmind/jobs/… HEART conducts interdisciplinary research to advance safe & beneficial AI.
I'm presenting our work on the Gaps in the Safety Evaluation of Generative AI today at @AIESConf! We survey the state of safety evaluations and find 3 gaps: the modality gap 📊, the coverage gap 📸, and the context gap 🌐. Find out more in the paper: ojs.aaai.org/index.php/AIES…
Interested in AI evaluation and safety? Make sure to listen to this great session from @weidingerlaura for the @turinginst youtube.com/watch?v=-aphjH…
Really enjoyed giving a talk on Sociotechnical Safety Evaluation of Generative AI at the @turinginst last week - recording is here, in case you want to watch! Spoiler: there are safety evaluation gaps that neither more benchmarks nor red teaming can fill. youtube.com/watch?v=-aphjH…
Thanks for the plug in this podcast of our work on AI safety evaluation, @Elliot_M_Jones!
On today's Lawfare Daily, @KevinTFrazier talked to @Elliot_M_Jones about the current state of efforts to test AI systems, why evaluations, audits, and related assessments have become a key part of AI regulation, and more. lawfaremedia.org/article/lawfar…
Great keynote talk by @weidingerlaura at @_odsc on the need to work collectively to set standards and systems for safety. #ODSC #DataScience #AI #MachineLearning #ArtificialIntelligence #Innovation #AIForGood #Networking #TechConference #DataScienceCommunity
Excited to be talking to AI practitioners & application developers at #odsc2024 in a few minutes! Ethics & safety are *tractable* and require everyone, including AI application developers and those who know the real-world use cases best. @_odsc
