Loquacious Bibliophilia
@LocBibliophilia
In a time of machines, I'm pro-human. Humanity is great and we ought to keep it around. d/acc+ for human relevance
Incredible new network! Follow, and help us get to a future that our children deserve!
Follow and share The AI Risk Network: Instagram: instagram.com/theairisk/Link… LinkedIn: linkedin.com/company/the-ai… TikTok: tiktok.com/@the.airisknet… Facebook: facebook.com/profile.php?id… Help us spread the word!!!
Biological longevity or digital immortality? YAMPOLSKIY: “I really hope for biological option. This is definitely going to preserve our consciousness. All the other alternatives, uploading, merging with tech, may end up creating a clone of you, not really keeping you around.”
Joe Allen warns that the vision behind AGI is not just about utility. It’s about wiping out human purpose entirely: “They’re not talking about making people better. They’re talking about the greater replacement. Replacing people altogether with machines.” —@JOEBOTxyz
Sooner or later we will all experience a similar crisis of self, particularly those of us who find meaning and purpose in our vocation. This can be existentially harrowing, so be kind to each other
the openai IMO news hit me pretty heavy this weekend. i'm still in the acute phase of the impact, i think. i consider myself a professional mathematician (a characterization some actual professional mathematicians might take issue with, but my party my rules) and i don't think i…
"It would also be dangerous to treat the possibility of AGI like any “normal scenario” in the national security world." Surprisingly good article in FA, including loss of control risks.
“Any national security strategy that fails to grapple with the potentially transformative effects of artificial general intelligence will become irrelevant,” argue Matan Chorev and Joel Predd. foreignaffairs.com/united-states/…
Joe Allen and Steve Bannon were warning about the dangers of AI long before ChatGPT was introduced to the public. I had no idea AI was as advanced as it was when it was made public. I thought they were maybe decades too early with their warnings but they were right on.
JOE ALLEN: I just pray to God the Trump administration doesn’t close the U.S. borders only to open a gate to hell and unleash AI upon us.
Most AI firms are unprepared for the dangers of human-level systems. @FLI_org and @tegmark have been right to call this out. My work shows alignment must be baked into the architecture *or it never scales*. #agi #superintelligence theguardian.com/technology/202…
Something people sometimes miss when comparing 'synaptic weights' to neural net weights is the idea of range vs resolution. Neurons have a limited range of connection strength: the strongest synapse is probably less than 1000x the weakest. But the resolution is immense.
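The range-vs-resolution distinction in that post can be sketched numerically. This is an illustrative toy, not anything from the thread: the ~1000x range figure is the post's own estimate, and the level counts (one million synapse levels, 8-bit NN weights) are arbitrary assumptions chosen for contrast.

```python
import numpy as np

# Toy contrast between "range" (ratio of strongest to weakest value)
# and "resolution" (number of distinguishable levels within that range).

# Synapse-like scheme: narrow range (1x to ~1000x, per the post's
# estimate) but an enormous number of distinguishable strengths.
# The one-million count is an arbitrary stand-in for "immense".
synapse_levels = np.linspace(1.0, 1000.0, num=1_000_000)
synapse_range = synapse_levels.max() / synapse_levels.min()  # 1000x

# Low-precision NN weight, e.g. int8: after scaling it can span a huge
# dynamic range, but offers only 255 distinct positive levels.
int8_levels = np.arange(1, 256, dtype=np.float64)
int8_resolution = len(int8_levels)

print(f"synapse range: {synapse_range:.0f}x, "
      f"resolution: {len(synapse_levels):,} levels")
print(f"int8 resolution: {int8_resolution} levels")
```

The point of the contrast: the two axes are independent, so comparing raw bit-widths of NN weights to synapses conflates how far values can span with how finely they can be distinguished.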
ROMAN YAMPOLSKIY: “As long as we don’t create general superintelligence, the future can be very bright.” “But my research shows you cannot indefinitely control superintelligence. If we build it, it will probably take us out.” “It would come up with something much more…
Another week, another member of Congress announcing their superintelligent AI timelines are 2028-2033:
Please note, we're not able to reproduce the 41.8% ARC-AGI-1 score claimed by the latest Qwen 3 release -- neither on the public eval set nor on the semi-private set. The numbers we're seeing are in line with other recent base models. In general, only rely on scores verified by…
"Agency without autonomy is hollow: sophisticated choice-execution emptied of genuine purpose. Autonomy without agency is impotence: a clear vision of what matters but no power to pursue it. Both are necessary, but autonomy provides the foundation that transforms choice-execution…
New essay: In defense of self-direction What Tocqueville, Aristotle, Humboldt, and Mill understood about human autonomy—and why the highest goods can’t be delivered, only pursued. 🧵
Even @politico heard that @JoinFAI is building conservative science policy
"Unfortunately, having failed to prevent that dynamic at the collective level, we're now stuck with it as an individual company." I'm not a fan of Gulf State investments, but Anthropic is much more sympathetic in these leaks than in most of their public communications.
SCOOP: Leaked memo from Anthropic CEO Dario Amodei outlines the startup's plans to seek investment from the United Arab Emirates and Qatar. “Unfortunately, I think ‘no bad person should ever benefit from our success’ is a pretty difficult principle to run a business on.”
the argument that LLM watermarking is hopeless really irked me for a long time but @OwainEvans_UK's subliminal learning results today demonstrate a big hole in the argument
1/5 New preprint w @_hanlin_zhang_, Edelman, Francanti, Venturi & Ateniese! We prove mathematically & demonstrate empirically impossibility for strong watermarking of generative AI models. What's strong watermarking? What assumptions? See blog and 🧵 harvard.edu/kempner-instit…
It is not business as usual.
The collective cognitive dissonance re Business as Usual, right now in 2025, is fascinating. Esp in light of who, and what, will be telling the story in 5, 10, 20 years from now. Make your own records of things. We need to write the way to the future, but documenting your…
Our journey to the UN-sponsored "AI For Good" summit was a head trip. We spoke to glitchy bots, real-life cyborgs, AI experts who say machines may be conscious, those who say it's all just wires and logic—and in both camps, those who believe they're potentially murderous. 🧵/1
I think Anthropic's totally right btw that "we should all ban X" is consistent with "if you do X, we will also." Normally AI companies aren't so direct about commitments they'll *conditionally* make, if and only if others do too The comms quote is ... not good though