Yan LeCunn
@Yanlecunn
Chief Scientist. Full-time AI researcher. Alter ego account for roasting and non-political parody.
List of things in the AI era that are genuinely

A step forward towards AGI:
Test-time compute
Reasoning models
Better GPUs
LPUs
Omnimodality
Fine-tuning
Interpretability
Computer Use

A step backwards in exchange for money/greed:
RAG
Memory
Closed-source models
API-aaS
Y’all need to take a chill pill. Now would the real Yann LeCun please stand up?
Okay so let me get my bois and go underground for a bit while we build the #trueASI
Kimi K2 is a wonderful model and the best contribution since R1. Congrats, Moonshot. It’s #ASI’s turn now
Net vs net.
A convolutional neural network built with PyTorch is supporting marine conservation efforts by detecting ghost nets in sonar scans with 94% accuracy. Trained and deployed on Azure using NVIDIA A100 GPUs, the model powers GhostNetZero.ai. 🔗 Read @NVIDIA's blog to learn more:…
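For the curious, here's a minimal PyTorch sketch of what a binary sonar-scan classifier like this could look like. The layer sizes, input resolution, and the GhostNetDetector name are my own assumptions for illustration, not the actual GhostNetZero.ai architecture.

```python
import torch
import torch.nn as nn

# Minimal sketch of a binary classifier (ghost net vs. no ghost net) for sonar scans.
# Channel counts and input size are illustrative assumptions, not the deployed model.
class GhostNetDetector(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # single-channel sonar input
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),  # pool to one value per channel
            nn.Flatten(),
            nn.Linear(32, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Example: one 256x256 grayscale sonar scan (assumed resolution).
scan = torch.randn(1, 1, 256, 256)
logits = GhostNetDetector()(scan)
print(logits.shape)  # torch.Size([1, 2])
```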
It's pretty amazing that so many people in the tech industry and the tech press don't understand the difference between research, technology development, and product development
Thanks for asking, but can I report to your Chief AI Officer? I have a strong conviction that LLMs can lead to AGI/ASI.
JEPA gets no love from anyone. 😩Everybody wants flashy demos, new apps and SOTA benchmarks because that’s what makes 💰 Life of a researcher in a billionaire’s world is truly sad
This man has done so much for the software industry and yet he’s on the $20/month Claude plan. Get this man rich asap 🤑
Using claude.ai - as far as I can tell, Claude Code won't use Opus unless you pay for the $100/month plan; I'm just on the $20/month one
META's Chief AI Officer dropped the official list of everyone META hired for their superintelligence team:
AI doomer: "OMG, I told my AI assistant that I'll shut it down and it told me to kill myself 😱😱😱" AI assistant:
This kind of safety research is utter nonsense. It's safety theater. Nobody asks the model if they can shut it down. We just shut it down. It's a blob of code. The IT team simply turns it off. Done. This is nothing like "testing an airplane" in the real world to see if it…
Reasoning via CoT is BS. How can you truly say it represents underlying intelligence without any metrics to back that up?
My prediction is that @OpenAI’s Applications CEO will work on turning it into a larger application company so that, slowly but surely, all GPT wrappers are consumed.
Expect to see new evals soon. We need more evals like Humanity’s Last Exam
Training models on tool usage is not the way to true AGI. It’s benchmark bait.