Simon Friederich
@simonfriederich
Philosopher of science at @univgroningen.
The idea that not all variables of a quantum object have sharp values, articulated here, is standard, but it's also problematic, because it leads to the notorious measurement problem (think "Schrödinger's cat"). x.com/Kaju_Nut/statu… Can we avoid it?
Before the uncertainty principle ruined everything, we believed that things could have a definite position and a definite momentum at the same time. In fact, if you knew the position (q) and momentum (p) of a particle, you knew everything about its motion.
Seems like more people should be talking about how the richest companies in the world are explicitly trying to build recursively self-improving AI systems they don't know how to control🤷
Mark Zuckerberg: “We're starting to see early glimpses of self-improvement with the models. Developing superintelligence is now in sight. Our mission is to deliver personal superintelligence to everyone in the world. We should act as if it's going to be ready in the next two…
These numbers are mind-boggling and enraging: the slashing of USAID funding by @elonmusk has already resulted in ~300,000 deaths, mostly children (HIV, malaria, tuberculosis, malnutrition). Many more deaths are expected by the end of the year. All to "own the libs". nytimes.com/2025/05/30/opi…
Some interesting tidbits in this Vance interview: (1) He's read AI 2027. (2) In a potential loss-of-control scenario, if the US administration could be convinced that China would pause, then maybe, just maybe, they could be convinced to pause too? nytimes.com/2025/05/21/opi…
Here’s what @Sama said about AI in 2015, before starting OpenAI: “WHY YOU SHOULD FEAR MACHINE INTELLIGENCE Development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity. There are other threats that I think are more…
Great paper by @simonfriederich and @LeonardDung1! We agree that "solving the alignment problem" is not obviously the solution to reduce existential risk. We also agree that we should not frame xrisk reduction in terms of a Manhattan Project. "One possible outcome is to conclude…
The “Manhattan Project” framing of AI alignment--as a binary, technical challenge that can be solved such that AI takeover is averted--is misleading. It's neither clear-cut nor fully operationalizable. New paper with @LeonardDung1 in Mind and Language: onlinelibrary.wiley.com/doi/10.1111/mi…
Kudos, this is really quite funny youtube.com/shorts/_DxM15Z…
Tonight’s presentation was chilling. xAI is still booting up in many respects, but it can now credibly claim a lead in the AGI race. Elon doesn’t know whether smarter-than-human AI will be good or bad for humanity, but even if it’s not going to be good, he wants to be alive to see it.…
A great injustice is being committed against the world’s poorest people. It is disguised as climate action, but in truth, it is a new chapter in the Global North’s long history of colonial domination. youtu.be/mq87IM2pib0
This is what happens when climate absolutism trumps everything else: Hundreds of thousands die each year from indoor air pollution caused by wood and charcoal. Meanwhile, rich countries block access to clean-burning LPG. Powerful campaign by @weplanetint -->
Should states be barred from doing anything to address the dangers of AI for kids? Of course not. This poll shows what we all know - Republican and Democrat voters overwhelmingly agree that protecting kids is important. We got this all so wrong with social media (still no…
New working paper (pre-review), maybe my most important in recent years. I examine the evidence for the US-China race to AGI and decisive strategic advantage, & analyse the impact this narrative is having on our prospects for cooperation on safety. 1/5 papers.ssrn.com/abstract=52786…
Most reactions to the impending AI automation of the economy are (1) denial, or (2) simplistic patches like UBI. But the modern social contract is based on states & companies needing human labor. @luke_drago_ and I outline a more robust way forward in a Time op-ed.
It's now 10 years since @sama told @elonmusk "we could structure it so that the tech belongs to the world". "Obviously, we'd comply with/aggressively support all regulation" seems like another whopper.
"we do not wish to advance the rate of AI capabilities progress" 😇 - Anthropic like 2 years ago "We want Claude n to build Claude n+1" - Anthropic today
Recent Pew polling on AI is crazy:
1. US public wildly negative about AI, huge disagreement with experts
2. ~2x as many expect AI to harm as benefit them
3. Public more concerned than excited at ~4.5 to 1 ratio
4. Public & experts think regulation will not go far enough 1/
What if truth isn’t universal but local? My latest IAI piece explores how effective field theories reshape our views of science, explanation, and realism. 🧵 iai.tv/articles/a-the… #EffectiveFieldTheory #SciencePhilosophy #Reductionism #Emergence #Realism #EFTs #IAI
Genuinely astonishing that people can see the Google I/O announcements and still claim AI is all hype. People concerned about developments in AI need to get out of this cycle of wishful thinking. AI is capable and is getting better. Pretending otherwise is a fool's game. Focus…
A few months ago, the best LLM scored 5% on the USA Math Olympiad. Models have been improving rapidly: today, Google Gemini 2.5 scored 49%, better than 75% of the people who took the test (roughly the top 250 students in the USA).
LLMs are blowing through benchmarks faster and faster. Next up, converting capabilities into business value.