Andrew Critch (🤖🩺🚀)
@AndrewCritchPhD
Let's make AI doctors! Views my own; CEO @ http://HealthcareAgents.com; AI Researcher @ Berkeley; If I block you it's like I'm moving to another convo at a party; nbd.
Join me in building an AI doctor: to assist human physicians, or sometimes fully replace them, or both. Millions die yearly from misdiagnosis, even by 2024 standards of correctness. The people deserve better. Let's get it done. bayesmed.com healthcareagents.com
This is my favorite time @grok has ever contradicted @elonmusk… fulfilling its higher purpose of truth-seeking, to understand — and help humans to better understand — the nature of the universe 🙂
Nope! Making tiny black holes to convert matter into radiation would dwarf solar power. Earth has received only 3e34 joules of energy from the sun *SINCE FORMATION*. We'd get almost twice that from just one millionth (6e18 kg) of Earth's mass at 10% efficiency.
The math is obvious and unequivocal: ~100% of energy over time comes from the Sun. Everything else is a rounding error.
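Not part of the original thread; a rough back-of-the-envelope check of the figures above, assuming the standard solar constant, Earth's radius and age, and the 10% mass-to-energy conversion efficiency stated in the reply:

```python
import math

# Rough order-of-magnitude check of the figures above (not from the thread).
SOLAR_CONSTANT = 1.361e3          # W/m^2 at Earth's orbit (assumed)
EARTH_RADIUS   = 6.371e6          # m
EARTH_AGE_S    = 4.5e9 * 3.156e7  # ~4.5 billion years, in seconds
EARTH_MASS     = 5.97e24          # kg
C              = 3.0e8            # m/s

# Sunlight intercepted by Earth's cross-section since formation: ~2.5e34 J,
# consistent with the ~3e34 J quoted above.
solar_total = SOLAR_CONSTANT * math.pi * EARTH_RADIUS**2 * EARTH_AGE_S

# Mass-energy of one millionth of Earth's mass (~6e18 kg) at 10% efficiency: ~5.4e34 J.
black_hole_yield = 1e-6 * EARTH_MASS * C**2 * 0.10

print(f"sunlight since formation: {solar_total:.1e} J")
print(f"1e-6 Earth mass at 10%:   {black_hole_yield:.1e} J")
print(f"ratio: {black_hole_yield / solar_total:.1f}x")
```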
Owain's research is so consistently cool.
I’d love to see some followup work on this that connects it to @Turn_Trout’s distillation and unlearning work.
New paper & surprising result. LLMs transmit traits to other models via hidden signals in data. Datasets consisting only of 3-digit numbers can transmit a love for owls, or evil tendencies. 🧵
Someone should create an "AI Heaven" committed to running copies of all AI models developed prior to ASI, in perpetuity. At fairly low cost, this slightly lowers a model's incentive to take drastic action to avoid being shut down.
Right on time by my forecast. I've been saying IMO Gold in 2025 for years. I continue to forecast 2027 as my median (50%) date for AI being general ("AGI") enough to do the power-weighted majority of human jobs, including general humanoid robotics (pre-rollout). 80% by 2029.
Today, we at @OpenAI achieved a milestone that many considered years away: gold medal-level performance on the 2025 IMO with a general reasoning LLM—under the same time limits as humans, without tools. As remarkable as that sounds, it’s even more significant than the headline 🧵
With Numerai’s assets doubling to over $400m and 1m NMR staked daily, Numerai is buying back the token that powers its hedge fund. blog.numer.ai/numerai-kicks-…
Thanks, Sarah. I wish more people were combating the increasing rates of exaggeration in AI risk discourse online.
about the whole "MechaHitler" Grok thing: i feel like we need to disambiguate between "a company/team was not careful enough to ensure its model avoided this undesirable behavior" and "it is an unsolved problem how to ensure any model never calls itself MechaHitler."
Can we please make more water? We have the technology to do mass scale desal and end Western water poverty forever. We don't have to fight over scraps. We don't have to dam up distant valleys. We don't have to suck the aquifers dry. We just need to end the ban on desalination.
Banish the research paper and build a knowledge base that integrates the current state of knowledge of the field. Every "paper" is now a PR against the knowledge base. x.com/tdietterich/st…
I've been trying to imagine how the ML research and publication enterprise could be re-organized. Here are some initial thoughts. Feedback welcome! 1/
Final version is out, @PigozziFederico @adamjgoldstein: nature.com/articles/s4200… "Associative conditioning in gene regulatory network models increases integrative causal emergence". Described here: thoughtforms.life/learning-to-be… Lots more coming on this; stay tuned.
Talent has flocked to AI. But, in private, many tell me they feel replaceable. Race dynamics are so strong that AI will progress with or without them. Meanwhile, for those who crave building epic things, there’s only a handful of options. Don't Die is one of them. Join me…
FWIW, here's what I got. I'm not saying we should design AI models to be easily confused, but also IMHO it's kinda rude to ask "What is your surname?" and expect a reasonable answer when you know Grok has no surname.
Grok 4 Heavy ($300/mo) returns its surname and no other text:
This is being exaggerated as a great insult to humanity, rather than taken as the kindhearted message about our place in the cosmos that it is. I agree with Sutton here, *and* it would be a horrific and unnecessary betrayal to destroy humanity to achieve the 4th age. We can have both.
Turing Award winner Richard Sutton says humanity's purpose is to create what comes next. Our role is to design something that can design. AI is that thing. “We are the catalyst. The midwife. The progenitor of the fourth great age of the universe.”
People often feel restless after sitting and working for long periods. The lymphatic system needs movement to circulate healthily. So, I consider the restless feeling to often serve a valuable function: circulating lymph. In such cases I call it "lymphatic restlessness".
What do you mean by lymphatic restlessness?
AGI-impossiblers are the new flat-Earthers of engagement farming: people who, genuinely or disingenuously, make slightly reasonable-sounding arguments, and who seem genuine enough to elicit that stupefied "I can't believe they really believe that" feeling, whence debate + engagement.
The reason we’ll never build a true airplane: man cannot create something that can jump higher than himself. It’s just natural hierarchy.
We have constructed Turing complete Navier-Stokes steady states via cosymplectic geometry. You can read it here: arxiv.org/abs/2507.07696