Balázs Kégl
@balazskegl
Head of AI Research @HuaweiFr.
My fourth conversation with the philosopher @Bardissimo, on his new book project on mysticism, bliss, the importance of phenomenology, the philosophy of time, transcendence, and transformation. Link in the comments. 00:00:00 Intro: making philosophy. 00:03:33 Why mysticism?…
Hinton is getting into embodied AI and realizing the complete change of frame. It's a religious moment. proceedings.mlr.press/v235/paolo24a.…
Geoffrey Hinton says LLMs are immortal. Destroy the hardware, save the weights. Rebuild them later, and “the very same being comes back to life.” But humans don't work that way. We're bound to our substrate. Not even Kurzweil gets a backup.
Blueprint has been a pain in my ass. It's kept me from focusing on the single thing I'm consumed with: how does the human race survive the rise of superintelligence? Every minute spent dealing with problems like 'why a supplier shipped us something out-of-spec' (now…
The daily mindblow from @drmichaellevin
A thought, which caused me to slightly update thoughtforms.life/are-we-too-par… (which I wrote because people often ask me whether we can find, "upwards", goal-directed systems we are part of in the same way I talk about finding collective intelligence in components of which we are made).…
It's a term used by John Vervaeke. See this thread for its connection to AI. x.com/balazskegl/sta…
Relevance realization is a wall for AI that is not even on the radar.
I think this is also the majority view in industry.
If you treat AI (or machine learning) as a tool, then it has been useful in many scientific domains for a long time and the latest models are impressive but not yet transformative. If you treat AI as an end in itself, then I can see why the last years feel transformative.
I think the threshold for AI to be genuinely transformative in scientific discovery remains "can this model ask interesting questions" rather than "can this model suggest useful answers". And that remains some way off.
This LIGO AI stuff sounds like a cool piece of work but "we really needed the AI" is not what I would conclude from this paragraph
This is definitely happening. Will make asking the right question, learning the right skill, more valuable than the actual skill.
ChatGPT for learning life-changing skills:
“LLMs are formally bullshitting.” Bullshit is a technical term, by the way (Frankfurt, 1986). What’s the difference between lying and bullshitting? A liar knows the truth but says something else; bullshitting means speaking with complete disregard for the truth, usually to get one’s…
Awesome, I've been saying this for a while, inspired by @DrJohnVervaeke. LLMs are formally bullshitting, yes. medium.com/@balazskegl/on… A couple of threads that may be interesting: x.com/balazskegl/sta… x.com/NandoDF/status… The connection: when we speak, we have an…
🤔 Feel like your AI is bullshitting you? It’s not just you. 🚨 We quantified machine bullshit 💩 Turns out, aligning LLMs to be "helpful" via human feedback actually teaches them to bullshit—and Chain-of-Thought reasoning just makes it worse! 🔥 Time to rethink AI alignment.
Legacy religions emerged as powerful adaptive systems within a particular environment. They helped people to discern what matters, and how to live in right relationship to self (others) and the world. But the world in which these systems evolved no longer exists. And legacy…
Attending #icml25, eager to discuss all things tabular data, structured foundation models, and AutoML! Lots of progress has been made this year, exciting times ahead! Next AutoGluon release is very soon.
Train on math until Dec 31st 1799, validation on what follows.
a guy created a dataset of 50 books from London 1800-1850 for LLM training. no modern bias. it’s actually super cool to see what can be trained on it!
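The "train until Dec 31st 1799, validate on what follows" idea is just a temporal holdout split. A minimal sketch of what that split could look like, with a hypothetical dated corpus standing in for the actual book dataset:

```python
from datetime import date

# Hypothetical corpus: (publication_date, text) pairs standing in
# for the real pre-1850 book collection.
corpus = [
    (date(1795, 6, 1), "Elements of geometry..."),
    (date(1799, 12, 31), "A treatise on fluxions..."),
    (date(1820, 3, 15), "Memoir on elliptic functions..."),
    (date(1846, 9, 23), "On the perturbations of Uranus..."),
]

# Temporal holdout: train on everything up to the cutoff,
# validate on everything strictly after it.
cutoff = date(1799, 12, 31)
train = [text for d, text in corpus if d <= cutoff]
val = [text for d, text in corpus if d > cutoff]

print(len(train), len(val))  # 2 2
```

The point of cutting by date rather than at random is that the validation set contains mathematics the model could not have seen in any form, so success there measures extrapolation rather than recall.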