Dan Hendrycks
@DanHendrycks
• Center for AI Safety Director • xAI and Scale AI advisor • GELU/MMLU/MATH/HLE • PhD in AI • Analyzing AI models, companies, policies, and geopolitics
Superintelligence is destabilizing. If China were on the cusp of building it first, Russia or the US would not sit idly by—they'd potentially threaten cyberattacks to deter its creation. @ericschmidt @alexandr_wang and I propose a new strategy for superintelligence. 🧵

Does AI deterrence require precise redlines? Nuclear, cyber, and criminal deterrence often rely on intentional ambiguity. The U.S. maintains a policy of strategic ambiguity on nuclear strikes, keeping open the option of a first strike under undefined conditions. Likewise, the U.S.…
Eric Schmidt says he's read the AI 2027 scenario forecast about what the development of superintelligence might look like. He says the "right outcome" will be some form of deterrence and mutually assured destruction, adding that the government should know where all the chips are.
I resisted AI for too long. Living in denial. Now it is game on. @xAI @Tesla @SpaceX
NVIDIA'S CEO: ELON IS A SUPERHUMAN, IT'S JUST UNBELIEVABLE. Jensen Huang: "Just to put it in perspective, a supercomputer that you would build would normally take three years to plan, and then it takes one year to deliver the equipment and get it all working. We're talking about 19…
In a new paper about AGI and preventive war, @RANDCorporation colleagues argue that the probability of war is low in absolute terms. But preventive war appears relatively more likely in an attempt to preserve a monopoly on AGI than to prevent one. rand.org/pubs/working_p… /1
Examples of international AI redlines:
1. Intelligence explosion redline. AIs might be able to improve AIs all by themselves in the next few years. The US and China should not want anybody to attempt an intelligence explosion where thousands of AIs are autonomously and rapidly…