Michael Huang ⏸️
@michhuan
Reduce extinction risk by pausing frontier AI unless provably safe @pauseai and banning AI weapons @bankillerrobots | Reduce suffering @postsuffering
“This category is deeply disturbing… none of the companies has anything like a coherent, actionable plan… Quantitative guarantees for alignment or control strategies were found to be virtually absent, with no firm providing formal safety proofs or probabilistic risk bounds…”
‼️📝 Our new AI Safety Index is out! ➡️ Following our 2024 index, 6 independent AI experts rated leading AI companies - @OpenAI, @AnthropicAI, @AIatMeta, @GoogleDeepMind, @xAI, @deepseek_ai & Zhipu AI - across critical safety and security domains. So what were the results? 🧵👇
“The decision to ban H20 exports earlier this year was the right one. We ask you to stand by that principle and continue blocking the sale of advanced AI chips to China… This is not a question of trade. It is a question of national security.”
SCOOP: Trump freezes export controls to secure a trade deal with China & boost the odds of a summit with Xi Jinping. + Security experts concerned about Trump allowing Nvidia to sell H20 chips to China on.ft.com/4f5y0WM
This story is almost 100% on the risks and race dynamics; skepticism is an afterthought. The inexorable growth of AI x-risk awareness is truly heartening! It's not fast enough. And it's always 2 steps forward, 1 step back. But I've watched this trend for >15 years and it's…
This article from @TheEconomist offers an accurate overview of key dynamics shaping the development of AI today: the risks of the rapid race toward AGI and ASI, the challenges posed by open-sourcing frontier models, the deep uncertainty revealed by ongoing scientific debates and…
MORATORIUM 2.0: Trump says AI will serve American goals, but the same tech bros who tried to ban states from regulating AI are shaping this plan. If fed standards override state laws, that’s just the AI moratorium all over again, and we already killed it once. We’ll fight it again.
New in the @guardian, @GarrisonLovely asks: "Should we really try to build a technology that may kill us all if it goes wrong?" 👉 "The draw of AGI is strong. But the risks involved are potentially civilization-ending. A civilization-scale effort is needed to compel the…
Artificial general intelligence is not inevitable. My latest for The Guardian challenges one of the most popular claims made about AGI. Among those who believe AGI is possible, it's common to think it's unstoppable, whether you're excited or terrified of the prospect 🧵
There’s at least one thing @SenSchumer and Steve Bannon can agree on: Limiting China's access to Nvidia H20 chips. Top Dems and China hawks wrote letters today opposing the H20 decision. Bannon also not happy about it. The export control debate doesn't fall along party lines...
Top Democrats and a group of China hawks wrote separate letters to Commerce Secretary Howard Lutnick today warning of the risks that come with allowing Beijing to buy the chip. More from @Dareasmunhoz: punchbowl.news/article/tech/m…
NEW: Twenty national security experts and former officials are calling on Commerce Sec. Howard Lutnick to block the sale of H20 chips to China. "China’s next generation of frontier AI will be built on the backs of the H20," warns the coalition. Letter: ari.us/letter-to-secr…
You might have reasonably thought that gradually releasing more powerful AI into the world grows society's immune response. But a concern is that instead society forms a view that AI is unreliable but harmlessly so, and this ossifies before we have actually dangerous systems.
This has been one of the hardest things to convince AI outsiders about: that we are building (and using in the wild!) powerful machines whose inner workings we do not understand.
🔥🔥🔥 from @mattyglesias - he nails a core reason for concern around AI development: if we can't understand what AI will do or why, isn't this a policy problem when AI becomes a lot more capable?
From Anthropic's response to the AI Action Plan. Fully agree.
What is 'subliminal learning'? A newly published paper finds that AI language models can learn from hidden signals transmitted between one another. Great article on this by @haydenfield in The Verge.
How much of industry opposition to the Chip Security Act is b/c reqs are “burdensome” to implement… and how much is b/c semiconductor companies want to be able to maintain plausible deniability about where their chips end up, so they can keep indirectly selling to China?
Chipmakers are objecting to a proposal to crack down on smuggling, despite lawmakers saying they made changes to a bill to appease industry concerns. More from @BenBrodyDC and @Dareasmunhoz: punchbowl.news/article/tech/c…
Great discussion in @mattyglesias's mailbag today about loss-of-control risk.
Anthropic peeps, can you feel the slippery slope you’re on? Anthropic is constantly having to compromise its stated values to stay competitive. At what point will you realize that the whole enterprise of competing in the race “for good” is doomed? It’s just competing in the race.
As I was finishing this piece, I realized it might help explain CEO Dario Amodei's about-face on taking money from Gulf dictators. @kyliebytes broke a great story about a memo Amodei sent Sunday announcing the decision... just days after the class action certification.
JOE ALLEN: STATES MUST BE FREE TO ACT. If the federal government drags its feet, the tech oligarchs favored in Trump’s AI Action Plan will steamroll the public. No guardrails, no accountability, no way to stop them. No law will save you once they unleash what they’ve built.
AI godfather Geoffrey Hinton explains why smarter-than-human AI could wipe us out.
BREAKING: The EU Commission has released a mandatory template for AI developers to disclose training data. Unlike the Code of Practice, this is not optional. It could have global fallout, as rights holders abroad might use it to sue over copyright. digital-strategy.ec.europa.eu/en/library/exp…
Yesterday, the European Parliament released a study criticizing the Commission’s withdrawal of the AI Liability Directive and arguing in favor of amending the AILD's original proposal to include a strict liability regime for high-risk AI systems. europarl.europa.eu/RegData/etudes…