Prafulla Dhariwal
@prafdhar
Head of Multimodal @OpenAI. Co-creator of GPT-4o, GPT-3, DALL-E 2, Jukebox, Glow, PPO. Previously @MIT '17
GPT-4o (o for “omni”), OpenAI’s first natively fully multimodal model, is the first model to come out of the omni team. This launch was a huge org-wide effort, but I’d like to give a shout-out to a few of my awesome team members who made this magical model possible!
Congrats to the GDM team on their IMO result! I think their parallel success highlights how fast AI progress is. Their approach was a bit different from ours, but I think that shows there are many research directions for further progress. Some thoughts on our model and results 🧵
1/N I’m excited to share that our latest @OpenAI experimental reasoning LLM has achieved a longstanding grand challenge in AI: gold medal-level performance on the world’s most prestigious math competition—the International Math Olympiad (IMO).
Today, we at @OpenAI achieved a milestone that many considered years away: gold medal-level performance on the 2025 IMO with a general reasoning LLM—under the same time limits as humans, without tools. As remarkable as that sounds, it’s even more significant than the headline suggests 🧵
Watching the model solve these IMO problems and achieve gold-level performance was magical. A few thoughts 🧵
🏅 Gold-medal performance at the IMO using purely natural-language reasoning, no tools or internet! I was expecting this to take a few more years, but the team has made such rapid progress. Congrats @alexwei_ @SherylHsu02 @polynoamial and many others at @OpenAI on this amazing achievement!!
My friends built this super adorable anime camera using the 4o image API—so creative and fun! 📸
We made a physical camera that prints you as an anime character, instantly! with @mirdhaaakanksha
Image gen is now available in the API! We’re launching gpt-image-1, making ChatGPT’s powerful image generation capabilities available to developers worldwide starting today.
✅ More accurate, high fidelity images
🎨 Diverse visual styles
✏️ Precise image editing
🌎 Rich world…
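For context, a minimal sketch of what calling gpt-image-1 from the API might look like, assuming the official openai Python SDK and an OPENAI_API_KEY in the environment; the prompt, size, and output filename are illustrative, and the assumption here is that the response carries base64-encoded image data:

```python
# Minimal sketch: generate an image with gpt-image-1 via the OpenAI API.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

result = client.images.generate(
    model="gpt-image-1",
    prompt="A watercolor postcard of a penguin coding on a laptop",
    size="1024x1024",
)

# Decode the base64 image payload and save it locally.
image_bytes = base64.b64decode(result.data[0].b64_json)
with open("postcard.png", "wb") as f:
    f.write(image_bytes)
```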
Introducing OpenAI o3 and o4-mini—our smartest and most capable models to date. For the first time, our reasoning models can agentically use and combine every tool within ChatGPT, including web search, Python, image analysis, file interpretation, and image generation.
Excited to share what I’ve been working on for the past few months! o3 and o4-mini are our first reasoning models with full tool support, including Python, search, image gen, etc. They also come with the best VISUAL reasoning performance to date!
Special shoutout also to @jhyuxm — incredible work growing and leading our Perception team, and making "thinking with images" a reality!!
“Thinking with Images” has been one of our core bets in Perception since the earliest o-series launch. We quietly shipped o1 vision as a glimpse—and now o3 and o4-mini bring it to life with real polish. Huge shoutout to our amazing team members, especially:
- @mckbrando, for…
Beyond our wildest expectations!! 🚀
the chatgpt launch 26 months ago was one of the craziest viral moments i'd ever seen, and we added one million users in five days. we added one million users in the last hour.
Wow, you can easily add 'behind' text with the ChatGPT 4o image model with just a prompt! Prompt: Add a title that says "[YOUR_TEXT]" on the background layer, positioned behind the [SUBJECT] so that parts of the letters appear hidden by the [SUBJECT] in the foreground.
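The tweet above uses the ChatGPT UI directly; as a rough sketch only, the same prompt pattern could plausibly be run through the API's image editing endpoint with gpt-image-1 (mentioned earlier in this thread) and the openai Python SDK. The file photo.png, the subject, and the title text below are placeholders, not values from the tweet:

```python
# Sketch: the "behind the subject" title trick via the image editing endpoint.
import base64
from openai import OpenAI

client = OpenAI()

your_text = "HELLO"   # placeholder title text
subject = "the cat"   # placeholder subject that appears in photo.png

prompt = (
    f'Add a title that says "{your_text}" on the background layer, positioned behind '
    f"{subject} so that parts of the letters appear hidden by {subject} in the foreground."
)

result = client.images.edit(
    model="gpt-image-1",
    image=open("photo.png", "rb"),
    prompt=prompt,
)

# Save the edited image returned as base64.
image_bytes = base64.b64decode(result.data[0].b64_json)
with open("photo_with_title.png", "wb") as f:
    f.write(image_bytes)
```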
Introducing FrostBytes 🐧, our fun twist on the iconic Codenames Pictures board game! frostbytes.vercel.app Couldn't find a fun way to play online, so @prafdhar and I built our own for date night!
and to @somayjain16 for a safety spec with a tasteful balance of respect and fun
Also a shout-out to @jackiemshannon, Wayne Change, @rohanjamin, @mengchaozzz, @tomerk11, and @_BrendanQuinn_ for working tirelessly these past few months to ship this in ChatGPT and Sora!
Oh my god!!
I heard something AI-ish is trending. So I thought, what if Ghibli made cricket?
Make iMessage Stickers of yourself with 4o: “turn me into a chibi sticker set”