Aksel
@akseljoonas
Agentic @ Hugging Face 🤗
elon musk saying “I didn’t want to start a company, I just couldn’t get a job” is kinda crazy
Exclusive inside interview with Elon Musk at the @ycombinator AI Startup School❗️ He talks about:
• FULLY leaving politics to focus on the incoming AI Tsunami
• Dropping out during the early internet boom
• Advice for young founders
• Rare behind-the-scenes SpaceX stories…
Here’s how you train an email agent from scratch with GRPO 👇
1️⃣ Nail a prompted baseline first. It flushes out tool bugs & gives you a benchmark to beat.
2️⃣ When the plateau hits, switch to RL. A 14B model jumped 40%→96%, beating o3 & Gemini, by laser-focusing on one job.
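(Not the thread's actual pipeline, but if you want to try step 2️⃣ yourself: a minimal GRPO sketch using TRL's GRPOTrainer. The dataset and reward below are toy placeholders, not the email-agent setup; swap in your own task prompts and a success-based reward.)

```python
# Minimal GRPO sketch with TRL (illustrative, not the email-agent pipeline from the thread).
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Placeholder dataset with a "prompt" column; replace with your own agent prompts.
dataset = load_dataset("trl-lib/tldr", split="train")

def toy_reward(completions, **kwargs):
    # Toy reward: prefer completions close to 20 characters.
    # For an email agent you'd score task success instead (e.g. did it find the right email?).
    return [-abs(20 - len(completion)) for completion in completions]

training_args = GRPOConfig(output_dir="email-agent-grpo")
trainer = GRPOTrainer(
    model="Qwen/Qwen2-0.5B-Instruct",  # stand-in base model for the sketch
    reward_funcs=toy_reward,
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```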
These insights from Manus are a must-read if you build agents!
- make use of the prompt cache (your agent will be 10x cheaper)
- the filesystem = memory module
- keep errors in the trace so the LLM can self-correct
Read this!
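(On the "keep errors in the trace" point, a minimal sketch of the idea; run_tool and call_model are hypothetical names, not Manus code.)

```python
import json

def run_tool(tool_fn, args: dict) -> str:
    """Run a tool and return a JSON string either way: the result on success,
    or the error text on failure. The string gets appended to the message
    history (append-only, which also keeps the prompt cache warm), so the
    model sees its own mistakes and can self-correct on the next turn."""
    try:
        return json.dumps({"ok": True, "result": tool_fn(**args)}, default=str)
    except Exception as e:
        # Don't swallow the failure or retry silently: keep it in the trace.
        return json.dumps({"ok": False, "error": f"{type(e).__name__}: {e}"})

# Usage inside an agent loop (call_model stands in for whatever LLM client you use):
#   messages.append({"role": "tool", "content": run_tool(search_inbox, {"query": q})})
#   reply = call_model(messages)
```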
After four overhauls and millions of real-world sessions, here are the lessons we learned about context engineering for AI agents: manus.im/blog/Context-E…
Thrilled to finally share what we've been working on for months at @huggingface 🤝@pollenrobotics Our first robot: Reachy Mini A dream come true: cute and low priced, hackable yet easy to use, powered by open-source and the infinite community. Tiny price, small size, huge…
We just released SmolLM3: a strong 3B model for fast multilingual long-context reasoning.
@AymericRoucher and I were asked to "make it agentic", so we cooked up some really nice datasets and training routines. They took the model to the Pareto frontier in function-calling.
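(If you haven't used chat-template tool calling before, here's a rough sketch of what function-calling looks like with transformers. The checkpoint name and the get_unread_emails tool are assumptions on my part; check the SmolLM3 model card for the exact tool-calling format.)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HuggingFaceTB/SmolLM3-3B"  # assumed checkpoint name, see the blog/model card
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

def get_unread_emails(mailbox: str):
    """Return unread email subjects for a mailbox.

    Args:
        mailbox: Name of the mailbox to check, e.g. "inbox".
    """
    return []  # hypothetical tool, stubbed out for the sketch

messages = [{"role": "user", "content": "Do I have unread mail?"}]
# The chat template turns the function's signature + docstring into a tool schema
# the model can call.
inputs = tokenizer.apply_chat_template(
    messages, tools=[get_unread_emails], add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```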
Introducing SmolLM3: a strong, smol reasoner!
> SoTA 3B model
> dual mode reasoning (think/no_think)
> long context, up to 128k
> multilingual: en, fr, es, de, it, pt
> fully open source (data, code, recipes)
huggingface.co/blog/smollm3
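(A quick sketch of the dual reasoning mode, assuming the /think or /no_think flag goes in the system prompt to switch modes; the checkpoint name is assumed too, so confirm both against the model card.)

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM3-3B")  # assumed checkpoint name

# Assumed convention: a /think or /no_think flag in the system prompt toggles
# the extended-reasoning mode. Render both prompts to compare what the model sees.
for mode in ("/think", "/no_think"):
    messages = [
        {"role": "system", "content": mode},
        {"role": "user", "content": "What is 17 * 24?"},
    ]
    prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
    print(f"--- {mode} ---\n{prompt}\n")
```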