Connor Leahy
@NPCollapse
CEO @ConjectureAI - Ex-Head of @AiEleuther - Leave me anonymous feedback: http://bit.ly/3RZbu7x - I don't know how to save the world, but dammit I'm gonna try
RELEASE: THE COMPENDIUM Several reckless groups are racing towards AGI, and risking the extinction of humanity. What is AGI? Who are these people? Why are they doing this? And what can we do about it? We answer these questions in The Compendium. 1/17

I dislike dunking on randos. But I heard the same argument from an SF org CEO. "People caring about the happiness of humans are selfish. They must instead care about future counterfactual non-human sentient beings. But it's ok. People don't have enough power to matter."
Are you a parent? Love your children? Then you are selfish and it is actually good that you & your kids will be murdered by AI (+everyone else). ~10% of AI researchers working hard to make such AI a reality basically think the same.
Holy shit these quotes from Congress are absolutely eye-popping: "...this week lawmakers demonstrated a level of AGI situational awareness that would have been unthinkable just months ago. •“Whether it’s American AI or Chinese AI, it should not be released until we know it’s…
I agree.
My current rough sense of history is that the last "moral panic" about social media turned out to be accurate warnings. The bad things actually happened, as measured by eyeball and by instrument. Now we all live in the wreckage. Anyone want to dispute this?
lol, lmao
SCOOP: Leaked memo from Anthropic CEO Dario Amodei outlines the startup's plans to seek investment from the United Arab Emirates and Qatar. “Unfortunately, I think ‘no bad person should ever benefit from our success’ is a pretty difficult principle to run a business on.”
lmao crazy result, in case you somehow thought AIs weren't weird enough yet
New paper & surprising result. LLMs transmit traits to other models via hidden signals in data. Datasets consisting only of 3-digit numbers can transmit a love for owls, or evil tendencies. 🧵
Max is a good guy and nails it in this podcast; he does a really good job of explaining the basics in great detail and with accuracy. Great show, give it a watch!
NEW EPISODE DROPPED... AI safety advocate @MaxWinga joins me to break down the reckless billionaire AI race, the path to superintelligence, and why humanity may only have 5 years left. Watch – youtu.be/hAfPF-iCaWU Links to full episode in 🧵
> "I would prefer my child to live" > "Selfish tbh" I know it sounds hyperbolic to claim these AI freaks want to wipe out humanity, but they say so themselves.
Truly stunning instance of ironic prophecy (an OAI model that got gold on the IMO was announced about 5 hours after this post)
So, all the models underperform humans on the new International Mathematical Olympiad questions, and Grok-4 is especially bad at them, even with best-of-n selection? Unbelievable!
People now spend most of their free time and attention on the Internet. Giving free rein to Big Tech there has caused the decline of many load-bearing institutions beyond journalism. This in turn has caused the rise of many bad ideologies beyond woke.
In retrospect one of the things that allowed wokeness to become so powerful, at its peak, was the decline of journalism as an industry. The kind of people who undertook its institutional capture in 2010 mostly couldn't have gotten hired in 1980.
On the Clearer Thinking podcast, Spencer Greenberg and I had an interesting discussion about AGI, what good institutions should look like, and under what conditions (if any) the flow of information should be restricted. Hope you'll give it a listen! podcast.clearerthinking.org/?ep=271
Speaking of Chernobyl analogies: Building an AI that searches the Internet, and misbehaves more when more people are expressing concern that it is unsafe, seems a lot like building a reactor that gets more reactive as the coolant boils off. This, in the context of Grok 4 Heavy…
Grok 4 Heavy ($300/mo) returns its surname and no other text:
The recent Anthropic blackmailing paper has gotten quite a lot of attention. This is a nicely put-together video explaining the paper in a balanced way. I recommend giving it a watch if you haven't read the paper yet! youtube.com/watch?v=eczw9k…
Great essay
Ideologies are limited. They focus on a narrow set of values ("Social Justice", "Freedom", "Order") and declare them supreme. These limitations conflict with reality, and how people react to that is very informative. Full essay linked in a reply.
I totally agree with this observation, but think it's even worse than that. It's not just that humanism is lacking in AI; it is lacking in shockingly many areas across life. We are not on track for a good world if that continues to be the case.
I’m struck by how profoundly non-humanistic many AI leaders sound.
- Sutton sees us as transitional artifacts
- x-risk/EA types reduce the human good to bare survival or aggregates of pleasure and pain
- e/accs reduce us to variables in a thermodynamic equation
- Alex Wang calls…
Ex-OpenAI researcher Steven Adler speaks out about what's really happening inside OpenAI. Our full conversation with @sjgadler, who led OpenAI's dangerous capability evaluations. We discuss alarming AI tests, the shocking state of internal safety standards, and more: 01:01 -…
Thanks to @Siliconvos' recent video made in partnership with us, over 2,000 citizens have used our tools to contact their representatives about the need to regulate powerful AI. Siliconversations has just made a new video about this success! [see the full video below]
Great evening at the @inferencemag debate on the potential for an intelligence explosion. Excellent contributions from @tylercowen, @TomDavidsonX, @NPCollapse and Mike Webb. Bravo @jackwiseman_ for organising and moderating!