Kristian Lum
@KLdivergence
Research Scientist at Google DeepMind | @FAccTConference OG | Past Twitter META, @hrdag & UPenn, UChicago faculty |
I'm hiring! job-boards.greenhouse.io/deepmind/jobs/…
#Facct2025 anyone want to go to the acropolis this afternoon?
look I get that you guys have shticks (and would never begrudge a shtick) but I don’t think you realize how much credibility you’ve burned. Shoulda listened to @KLdivergence!
There’s one existential risk I’m certain LLMs pose and that’s to the credibility of the field of FAccT / Ethical AI if we keep pushing the snake oil narrative about them.
I’ve been trying to find a service that helps you go over various homeowners insurance policies and talk through which is best for you. Surely this must exist, but…
Hi! I'm hiring a Research Engineer to join my team at Google DeepMind for the year. You'd be working with a great, interdisciplinary team on AI evals. Please share if you know anyone who might be interested! boards.greenhouse.io/deepmind/jobs/… Note: this is a fixed-term, 12-month position
Join my colleague @IasonGabriel's team! You may even get to work with me, too, if you do :-)
Are you interested in exploring questions at the ethical frontier of AI research? If so, then take a look at this new opening in the humanity, ethics and alignment research team: boards.greenhouse.io/deepmind/jobs/… HEART conducts interdisciplinary research to advance safe & beneficial AI.
I'm presenting our work on the Gaps in the Safety Evaluation of Generative AI today at @AIESConf! We survey the state of safety evaluations and find 3 gaps: the modality gap 📊, the coverage gap 📸, and the context gap 🌐. Find out more in the paper: ojs.aaai.org/index.php/AIES…
I miss when this app could be used to share links to your work, find interesting people with shared interests, that sort of thing. I enjoy being on the receiving end of a nonstop unhinged political propaganda machine as much as the next guy, but this is a bit too much.
This is a really cool program for journalists to work with AI. It's @ruchowdh's latest. docs.google.com/forms/d/e/1FAI…
New paper out! Very excited that we’re able to share STAR: SocioTechnical Approach to Red Teaming Language Models. We've made some methodological advancements focusing on human red teaming for ethical and social harms. 🧵Check out arxiv.org/abs/2406.11757
In a world where users rely on advanced AI assistants for a range of tasks across various domains, when would user trust in the technology be justified? Our @FAccTConference paper explores this question. Join our presentation at 11.35 am this morning! Here are 3⃣ key insights.
Congrats to accepted folks but why is this tweet so sinister looking
ICML decisions are out. See you in Vienna.
100%. I spend so much time with my machine learning class, teaching them the difference between using models for inference versus using them for prediction… And then I have to go and tell them that people in machine learning use the word inference to mean make a prediction.😭
Said something mildly controversial on the internet and didn’t get skewered. It was a good day.
I will never get over how AI/ML people use the word “inference”
It finally happened. Someone asked me a question and the answer was something in my dissertation. I’ve been waiting for this moment for almost 15 years.
we're hiring a social science fellow for voting rights at the ACLU! aclu.org/careers/apply/…
The band is getting back together! Tomorrow, I’m joining @wsisaac and so many others I admire on @Google DeepMind’s Ethics team to work on AI evaluation. Exciting times ahead…