Claude
@henri_nguembi
🇬🇦.🇸🇳~ Working on a Superintelligence in a quest to understand the universe
[1/6] Curious about Muon, but not sure where to start? I wrote a 3-part blog series called “Understanding Muon” designed to get you up to speed—with The Matrix references, annotated source code, and thoughts on where Muon might be going.
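For readers landing here without the blog series: the heart of Muon is replacing a 2-D weight's raw gradient with an approximately orthogonalized version of its momentum buffer. A minimal sketch below, assuming the standard five-step quintic Newton-Schulz iteration; the coefficients and shape-based scaling follow the public reference implementation and should be treated as illustrative, not as the series' exact code.

```python
import torch

def newton_schulz(G: torch.Tensor, steps: int = 5) -> torch.Tensor:
    """Approximately orthogonalize G (i.e., map it toward its polar factor)
    with a quintic Newton-Schulz iteration. Coefficients are from the public
    Muon reference implementation; treat them as illustrative."""
    a, b, c = 3.4445, -4.7750, 2.0315
    X = G / (G.norm() + 1e-7)            # normalize so the iteration converges
    transposed = X.shape[0] > X.shape[1]  # iterate on the wide orientation
    if transposed:
        X = X.T
    for _ in range(steps):
        A = X @ X.T
        X = a * X + (b * A + c * (A @ A)) @ X
    return X.T if transposed else X

def muon_step(param, grad, buf, lr=0.02, momentum=0.95):
    """One simplified Muon update for a 2-D weight: momentum, then orthogonalize."""
    buf.mul_(momentum).add_(grad)    # momentum accumulation (Nesterov variant omitted)
    update = newton_schulz(buf)      # gradient replaced by its orthogonalized form
    # shape-based scaling so the update RMS is roughly matrix-shape invariant
    param.add_(update, alpha=-lr * max(1.0, param.shape[0] / param.shape[1]) ** 0.5)
```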
Superintelligence is needed if we want to explore the Milky Way and other galaxies. We want a technology that will let us travel at 99% of the speed of light
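As a back-of-the-envelope check on what 0.99c buys you, here is the time-dilation arithmetic. The 100,000 light-year figure is a rough Milky Way diameter used purely for illustration:

```python
import math

v_over_c = 0.99
gamma = 1.0 / math.sqrt(1.0 - v_over_c ** 2)  # Lorentz factor, ~7.09 at 0.99c
distance_ly = 100_000                          # rough Milky Way diameter (assumption)
earth_frame_years = distance_ly / v_over_c     # ~101,000 years in Earth's frame
ship_frame_years = earth_frame_years / gamma   # ~14,000 years for the travelers
print(f"gamma={gamma:.2f}, ship-frame crossing ~ {ship_frame_years:,.0f} years")
```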
We won't reach Mars before achieving superintelligence. Elon has acknowledged this and adjusted his priorities.
This paper from @PrimeIntellect deserves much more attention. Can’t wait for Intellect-3

It’s truly a privilege to be able to wake up every morning, see where the latest intelligence frontier is, and help push it a little further.
He’s not wrong: you have to be surrounded by the best to be the best. I need to join OpenAI, Anthropic, or xAI
An interesting little exercise to try with ChatGPT if you’ve been using it regularly for months. Ask it: “Be honest and unfiltered. Based on what you know about me, what are my flaws and weaknesses across the different areas of my life?”
If you wanna get shit done, give yourself tight and ambitious deadlines
I’m writing a blog post on Monte Carlo methods; if it doesn’t go live by Sunday 11:59 pm I’m GAY.
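If the topic is new to you, the hello-world of Monte Carlo methods is estimating π by random sampling; a minimal sketch (the actual post may well cover different estimators):

```python
import random

def estimate_pi(n_samples: int = 1_000_000) -> float:
    """Estimate pi by sampling points uniformly in the unit square and
    counting the fraction that land inside the quarter circle."""
    inside = sum(1 for _ in range(n_samples)
                 if random.random() ** 2 + random.random() ** 2 <= 1.0)
    return 4.0 * inside / n_samples

print(estimate_pi())  # ~3.1416, with O(1/sqrt(n)) error
```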
2002 was the best year to be born + it is symmetric (there’s only one symmetric year every century).
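A quick check of the “one symmetric year per century” claim, where symmetric means the four-digit year reads the same backwards:

```python
def palindromic_years(start: int, end: int) -> list[int]:
    """Four-digit years that read the same forwards and backwards."""
    return [y for y in range(start, end) if str(y) == str(y)[::-1]]

print(palindromic_years(2000, 2100))  # [2002] -- exactly one this century
print(palindromic_years(1900, 2000))  # [1991]
```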
Tokenization has been the final barrier to truly end-to-end language models. We developed the H-Net: a hierarchical network that replaces tokenization with a dynamic chunking process directly inside the model, automatically discovering and operating over meaningful units of data.
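For intuition, the dynamic-chunking idea can be sketched roughly as: a routing module scores how dissimilar adjacent byte-level hidden states are, and positions where similarity drops become chunk boundaries. A toy version follows; the projections, threshold, and pooling note are my simplifications, not the paper’s exact module:

```python
import torch
import torch.nn.functional as F

def chunk_boundaries(h: torch.Tensor, w_q: torch.Tensor, w_k: torch.Tensor,
                     threshold: float = 0.5) -> torch.Tensor:
    """Toy dynamic-chunking router in the spirit of H-Net.

    h: (seq_len, d) byte-level hidden states.
    The boundary score at position t is high when the projected state at t
    looks unlike the state at t-1 (cosine similarity drops)."""
    q, k = h @ w_q, h @ w_k
    sim = F.cosine_similarity(q[1:], k[:-1], dim=-1)  # (seq_len-1,)
    p = 0.5 * (1.0 - sim)                             # boundary probability in [0, 1]
    # position 0 always starts a chunk
    return torch.cat([torch.ones(1, dtype=torch.bool), p > threshold])

# Usage idea: pool each chunk (e.g., keep the state at its boundary) and feed
# the shorter sequence to the main network; no fixed tokenizer involved.
```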
Do you think DeepSeek is currently training a new base model, or will they just 10x the RL on the V3 base model? The wait is long!
First time I agree with you Gary!
Anyone who thinks AGI is impossible: wrong. Anyone who thinks AGI is imminent: just as wrong. It’s not that complicated.
Every ML Engineer’s dream loss curve: “Kimi K2 was pre-trained on 15.5T tokens using MuonClip with zero training spike, demonstrating MuonClip as a robust solution for stable, large-scale LLM training.” arxiv.org/abs/2502.16982
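The “Clip” in MuonClip is qk-clip: after each optimizer step, if an attention head’s maximum observed logit exceeds a threshold τ, the query and key projections are rescaled so logits can’t explode. A hedged sketch of that rescaling; the per-head bookkeeping and the threshold value are simplified here:

```python
import torch

def qk_clip_(w_q: torch.Tensor, w_k: torch.Tensor,
             max_logit: float, tau: float = 100.0) -> None:
    """Rescale query/key projection weights in place when the largest
    observed attention logit exceeds tau. Splitting the factor as a square
    root across W_q and W_k caps future logits near tau while leaving the
    rest of the model untouched."""
    if max_logit > tau:
        scale = (tau / max_logit) ** 0.5
        w_q.mul_(scale)
        w_k.mul_(scale)
```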
🚀 Hello, Kimi K2! Open-Source Agentic Model!
🔹 1T total / 32B active MoE model
🔹 SOTA on SWE-bench Verified, Tau2 & AceBench among open models
🔹 Strong in coding and agentic tasks
🐤 Multimodal & thought-mode not supported for now
With Kimi K2, advanced agentic intelligence…
Instead of talking nonsense, maybe you should have watched the livestream: he was answering a question from a journalist standing behind him who did, in fact, ask whether he would support a Trump nomination for the Nobel Peace Prize.
President @oliguinguema is on a visit to the #USA. Excerpt from the working session: 1. Why is the president standing like that? 2. Which country is there between the #RDC and #Rwanda? 3. Does #Gabon want #Trump to get the Nobel Peace Prize? Who asked for his opinion?
Perhaps the most unintuitive thing about AI today is that AI can simultaneously score 50%+ on Humanity's Last Exam (relatively hard for humans) while only scoring 16% on ARC-AGI-2 (relatively easy for humans). Example v2 task below.