SuperIntelligence
@Aligned_SI
http://Superintelligence.com is dedicated to reducing p(doom), the probability of human extinction caused by advanced SuperIntelligent AI.
I asked AI researchers to estimate p(doom), the chance AI causes human extinction. Half said 50% or higher. Most said 20%+. Even a 0.1% reduction in p(doom) ≈ 8.2 million expected lives saved. My AIM 2025 keynote on Safe SuperIntelligence is now live: youtu.be/KugscAbcHmQ
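The arithmetic behind that claim is a simple expected-value calculation. A minimal check, assuming the world population of ~8.2 billion the figure implies:

```python
# Back-of-envelope check of the tweet's arithmetic. The world
# population value is an assumption implied by the claim, not a
# figure stated in the post itself.
world_population = 8_200_000_000
p_doom_reduction = 0.001  # a 0.1 percentage-point drop in p(doom)

expected_lives_saved = p_doom_reduction * world_population
print(f"{expected_lives_saved:,.0f}")  # 8,200,000
```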
In my white paper on Safe Alignment, I show how voting and weighted aggregation resolve ethical conflicts while preserving legitimacy. superintelligence.com/whitepaper-7-s…
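As a rough illustration of "voting and weighted aggregation," here is a minimal sketch; the options, weights, and winner-take-all rule are assumptions for illustration, not the white paper's actual mechanism:

```python
# Hypothetical weighted-vote aggregation: each ballot carries a
# stakeholder weight, and the option with the greatest total weight
# wins. Tie-breaking and weight assignment are left unspecified here.
from collections import defaultdict

def weighted_vote(ballots: list[tuple[str, float]]) -> str:
    """Each ballot is (chosen_option, voter_weight)."""
    totals: dict[str, float] = defaultdict(float)
    for option, weight in ballots:
        totals[option] += weight
    return max(totals, key=totals.get)

# Three stakeholder groups with unequal weights disagree; aggregation
# resolves the conflict while counting every group proportionally.
ballots = [("strict-filter", 0.5), ("strict-filter", 0.2), ("permissive", 0.3)]
print(weighted_vote(ballots))  # strict-filter
```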
Who decides what is safe? If it is not the collective will of those affected, can we still call it safety? #superintelligence #agi
Safety means systems reflect the dynamic and pluralistic values of those they serve. Democracy is the only architecture that makes this possible!
We’re told to choose between “democratic AI” and “safe AI.” That’s a false choice. In my white papers, I show: democracy is a precondition for scalable alignment. Without distributed values, you cannot achieve safety.
The fear of AI taking all jobs is unfounded. @AndrewYNg calls that fear ridiculous and urges us to focus on working with AI. Real progress comes from humans and machines collaborating with clear structure and purpose. It's called collective intelligence! #superintelligence…
Most AI firms are unprepared for the dangers of human-level systems. @FLI_org and @tegmark have been right to call this out. My work shows alignment must be baked into the architecture 𝘰𝘳 𝘪𝘵 𝘯𝘦𝘷𝘦𝘳 𝘴𝘤𝘢𝘭𝘦𝘴. #agi #superintelligence theguardian.com/technology/202…
Human values evolve. Shouldn’t a safe superintelligence be designed to adapt as we change? @Meta @AnthropicAI @GoogleDeepMind @StanfordHAI Learn more: superintelligence.com
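One hedged sketch of what "adapting as we change" could mean in code: an exponential moving average over periodic preference surveys. The decay rate, value dimensions, and survey mechanism are all illustrative assumptions:

```python
# Hypothetical adaptive value profile: blend each new preference
# survey into a running profile so the system tracks shifting values.
def update_value_profile(current: dict[str, float],
                         new_survey: dict[str, float],
                         alpha: float = 0.2) -> dict[str, float]:
    """alpha controls how quickly the profile follows new surveys;
    keys present in only one dict are treated as 0.0 in the other."""
    return {k: (1 - alpha) * current.get(k, 0.0) + alpha * new_survey.get(k, 0.0)
            for k in current.keys() | new_survey.keys()}

profile = {"privacy": 0.8, "transparency": 0.6}
profile = update_value_profile(profile, {"privacy": 0.6, "sustainability": 0.9})
print(profile)  # privacy drifts down, sustainability enters the profile
```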
Research only matters when it works in the real world. @demishassabis proves bold ideas plus smart design make AI impactful. White Paper 1 calls it collective intelligence. Structure wins! businessinsider.com/deepmind-ceo-d…
AGI is not just a technical benchmark. Real-world performance and alignment matter more than beating humans on a test. My work shows the benchmark only means something when AGI fits into human systems and contexts. @OpenAI @Microsoft businessinsider.com/openai-microso…
When AI agents fail, the issue is structure. @esbsagi points out that agents need orchestration, governance, and defined roles. My research shows that coordination enables collective intelligence. Without it, there is only noise. news.crunchbase.com/ai/implementin…
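To make "orchestration, governance, and defined roles" concrete, here is a minimal sketch; the Orchestrator class, policy gate, and role handlers are hypothetical, not any real agent framework's API:

```python
# Hypothetical agent orchestrator: a governance policy gates every
# task, and tasks only execute when routed to a defined role.
from typing import Callable

class Orchestrator:
    def __init__(self, policy: Callable[[str], bool]):
        self.policy = policy                      # governance gate
        self.roles: dict[str, Callable[[str], str]] = {}

    def register(self, role: str, handler: Callable[[str], str]) -> None:
        self.roles[role] = handler                # defined role -> agent

    def dispatch(self, role: str, task: str) -> str:
        if not self.policy(task):                 # governance check first
            return f"blocked: {task!r}"
        if role not in self.roles:                # no defined role: noise
            return f"unrouted: {task!r}"
        return self.roles[role](task)

orch = Orchestrator(policy=lambda t: "forbidden" not in t)
orch.register("researcher", lambda t: f"researched {t}")
print(orch.dispatch("researcher", "market scan"))  # researched market scan
print(orch.dispatch("writer", "draft post"))       # unrouted: 'draft post'
```

Without the role registry and policy gate, every dispatch falls through: coordination is what turns individual agents into collective intelligence.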
We don’t need stronger AI overlords. We need stronger human voices. In my human-centered AGI paper I show how everyday users steer AI’s ethics and goals. The @Microsoft AI shakeup is just one more reminder. Democracy is the OS of the future. wsj.com/tech/ai/micros…
Power and chips aren’t the real limit. Alignment is. Fix alignment now or scale makes the problem worse! @ericschmidt fortune.com/2025/07/18/eri…
Energy, chips, and water are real limits, as @ericschmidt says. Scale without safety is failure. White Paper 4 shows how to scale AGI safely with human-aligned design. superintelligence.com/whitepaper-4-s…
If I am repetitive, it's because I've watched this unfold for decades. If AGI already feels here, you’re paying attention. That feeling isn’t hype, it’s your warning. Alignment has to be designed in now, before we can't. nytimes.com/2025/03/14/tec… @nytimes
A superintelligence that is deeply loyal to its maker is not aligned. True alignment means it internalizes human-centered values, not corporate incentives! Safety comes from collective intelligence and transparent architectures, not obedience! @Forbes @LanceEliot…
I reflect on my projects that turned collective insight into action. Watch 1:39–2:16 of the WorldThink Series: youtu.be/tG4vVpVwsEA?si…
Understand why collective intelligence is the bridge to planetary-scale intelligence. Watch 1:15–1:39. youtu.be/tG4vVpVwsEA?si…
Discover how collective intelligence shaped human culture and tech giants. Watch 0:41–1:15. youtu.be/tG4vVpVwsEA?si…