AI at Meta
@AIatMeta
Together with the AI community, we are pushing the boundaries of what’s possible through open science to create a more connected world.
Today marks the start of a new era of natively multimodal AI innovation. We’re introducing the first Llama 4 models: Llama 4 Scout and Llama 4 Maverick — our most advanced models yet and the best in their class for multimodality. Llama 4 Scout • 17B-active-parameter model…

We’re thrilled to see our advanced ML models and EMG hardware, which transform the neural signals controlling muscles at the wrist into commands that seamlessly drive computer interactions, appearing in the latest edition of @Nature. Read the story: nature.com/articles/s4158… Find…
🚨 New open-source drop: The AI Alliance is now supporting Llama Stack, a modular AI application framework developed by Meta. Built for portability, developer choice, and real-world deployment. Details ⬇️ 🔗 thealliance.ai/blog/ai-allian…
Meta FAIR recently released the Seamless Interaction Dataset, the largest known high-quality video dataset of its kind, with:
• 4,000+ diverse participants
• 4,000+ hours of footage
• 65k+ interactions
• 5,000+ annotated samples
This dataset of full-body, in-person, face-to-face…
This week we shared an open-source AI tool that will help accelerate the discovery of high-performance, low-carbon concrete. See the full story in the thread below. You can also find research artifacts for this project here:
1️⃣ Technical report with details of the model and…
We’re proud to share that we’ve developed an open-source AI tool to design concrete mixes that are stronger, more sustainable, and faster to deploy. The tool uses Bayesian optimization with Meta’s BoTorch and Ax frameworks, and was built in collaboration with @Amrize and…
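The post above says the concrete-mix tool uses Bayesian optimization via Meta's BoTorch and Ax frameworks. As a rough illustration of the loop those frameworks automate, here is a self-contained toy sketch using only the standard library (not BoTorch or Ax): a Gaussian-process surrogate plus an upper-confidence-bound acquisition rule, applied to a hypothetical 1D objective standing in for a predicted concrete property. Everything here (the objective, kernel length scale, and grid) is an assumption for illustration, not Meta's actual setup.

```python
import math

def objective(x):
    # Hypothetical smooth objective with its maximum at x = 0.6,
    # standing in for a real target like predicted strength.
    return -(x - 0.6) ** 2

def rbf(a, b, length=0.2):
    # Squared-exponential (RBF) kernel; length scale is an assumption.
    return math.exp(-((a - b) ** 2) / (2 * length ** 2))

def solve(K, y):
    # Gaussian elimination with partial pivoting for small dense systems.
    n = len(K)
    A = [row[:] + [y[i]] for i, row in enumerate(K)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[pivot] = A[pivot], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

def gp_posterior(xs, ys, x_star, noise=1e-6):
    # Gaussian-process posterior mean and std at a candidate point.
    K = [[rbf(a, b) + (noise if i == j else 0.0)
          for j, b in enumerate(xs)] for i, a in enumerate(xs)]
    k_star = [rbf(a, x_star) for a in xs]
    alpha = solve(K, ys)          # K^-1 y
    z = solve(K, k_star)          # K^-1 k*
    mean = sum(k * a for k, a in zip(k_star, alpha))
    var = max(rbf(x_star, x_star) - sum(k * w for k, w in zip(k_star, z)), 0.0)
    return mean, math.sqrt(var)

def bayes_opt(n_iters=10, beta=2.0):
    xs = [0.1, 0.9]                         # initial design points
    ys = [objective(x) for x in xs]
    grid = [i / 100 for i in range(101)]    # candidate mix parameters
    for _ in range(n_iters):
        # Upper-confidence-bound acquisition: prefer points with a high
        # predicted value or high uncertainty (exploration).
        def ucb(x):
            m, s = gp_posterior(xs, ys, x)
            return m + beta * s
        x_next = max(grid, key=ucb)
        xs.append(x_next)
        ys.append(objective(x_next))
    best_y, best_x = max(zip(ys, xs))
    return best_x, best_y

best_x, best_y = bayes_opt()
print(f"best x = {best_x:.2f}, objective {best_y:.4f}")
```

In BoTorch/Ax the surrogate fitting, acquisition optimization, and trial bookkeeping are handled for you (e.g. via Ax's Service API); this sketch only shows the propose-evaluate-update loop that the frameworks implement at scale over many mix variables at once.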
"Our mission with the lab is to deliver personal superintelligence to everyone in the world. So that way, we can put that power in every individual's hand." - Mark
Watch Mark's full interview with The Information as he goes deeper on Meta's vision for superintelligence and…
The Information | TITV | July 15th, 2025 x.com/i/broadcasts/1…
Today Mark announced Meta's major AI compute investment. See his post: facebook.com/share/v/1AnKhQ…

Tired of manual prompt tweaking? Watch the latest Llama tutorial on how to optimize your existing GPT or other LLM prompts for Llama with `llama-prompt-ops`, the open-source Python library! In this video, Partner Engineer Justin Lee demonstrates installation, project setup,…
Take your AI development skills to the next level with our latest course on DeepLearning.AI, "Building with Llama 4", taught by @AndrewYNg and Amit Sangani, Director of Partner Engineering for Meta's AI team. In this comprehensive course, you'll learn how to harness the…
The response to our first-ever Llama Startup Program was astounding, and after reviewing over 1,000 applications we’re thrilled to announce our first group. This eclectic group of early-stage startups is ready to push the boundaries of what’s possible with Llama and drive…

Introducing V-JEPA 2, a new world model with state-of-the-art performance in visual understanding and prediction. V-JEPA 2 can enable zero-shot planning in robots—allowing them to plan and execute tasks in unfamiliar environments. Download V-JEPA 2 and read our research paper…
Our vision is for AI that uses world models to adapt in new and dynamic environments and efficiently learn new skills. We’re sharing V-JEPA 2, a new world model with state-of-the-art performance in visual understanding and prediction. V-JEPA 2 is a 1.2 billion-parameter model,…
Aria Gen 2 glasses mark a significant leap in wearable technology, offering enhanced features and capabilities that cater to a broader range of applications and researcher needs. We believe researchers from industry and academia can accelerate their work in machine perception,…