Hyung Won Chung
@hwchung27
Research Scientist @OpenAI. Past: @Google Brain / PhD @MIT
Here is my talk at @MIT (after some delay 😅). I made this talk last year when I was thinking about a paradigm shift. This delayed posting is timely, as we just released o1, which I believe is a new paradigm. It's a good time to zoom out for some high-level thinking. (1/11)
Woman has a lingering sore throat. Doctor tells her to wait it out; ChatGPT suggests an ultrasound, which reveals… aggressive thyroid cancer. I tell all my friends and family: please get a second opinion on medical stuff from AI. It might save your life!
Highly recommend this Stanford lecture video with @_jasonwei and @hwchung27 :) It's one of my favorites on scaling laws and the bitter lesson! Also Hyung's "Don't teach. Incentivize" video: youtube.com/watch?v=kYWUEV… youtube.com/watch?v=3gb-Zk…
When we make rapid progress, we tend to double down on the working paradigm. The Bitter Lesson suggests that this can be risky for deep learning. When trying to get a new paradigm to work at all, it is often necessary to add structures, e.g. clever modeling ideas. These structures…
I really like this diagram from @_jasonwei and @hwchung27 about how to view the bitter lesson: it's a mistake not to add structure now, and it's a mistake not to remove that structure later. We're at the precipice of setting up a huge, powerful RL training run that will define the…
I often hear people say they believe the world will be vastly different in 5 years. Yet many of them plan their lives and careers as if things won't change much. Why such a discrepancy? Humans have a bias toward downplaying major future changes, especially fast-moving ones. Perhaps…
What we're seeing in AI will also happen in other technical fields. While AI’s expected impact is undeniably large, that's not unique; there are other hugely valuable areas, e.g. robotics, longevity. What truly differentiates AI is its rate of progress, specifically how it's…
We have long been accustomed to planning life around a 30-year career. As our healthspan increases, that assumption is increasingly wrong. What would you do differently if your career were, say, 100 years instead of 30? Which options have you subconsciously given up because you…
Differences in model quality are magnified by task difficulty. So if you work on harder problems, you benefit more from AI progress. A good forcing function to work on more challenging problems!
Going back from One to Zero is harder than going from Zero to One.
If you already have Plus or Pro, it is very easy to try Codex CLI, and you get free API credits. Give our latest codex-mini model a try!
Plus and Pro users who sign in to Codex CLI with ChatGPT can now redeem $5 and $50 in free API credits, respectively, for the next 30 days.
It was a fun (and intense) sprint working on the early versions of our codex (agent) and training the codex (mini-latest) model with @hwchung27 @fouadmatin @rohancalum, optimized for codex (cli)! 😅
We've made some improvements to Codex CLI, based on your feedback: ⬥ Sign in with ChatGPT to quickly connect your API org ⬥ New model, codex-mini, optimized for low-latency code Q&A and editing
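For anyone redeeming those credits against the API, here is a minimal sketch of what a codex-mini call could look like, assuming the standard openai Node SDK with the Responses API and a "codex-mini-latest" model id (inferred from the posts above; check the docs for the exact name).

```typescript
// Minimal sketch, not an official example: calling codex-mini through the
// OpenAI Responses API with the Node SDK. Assumes OPENAI_API_KEY is set and
// that the free API credits mentioned above have been redeemed.
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function main() {
  const response = await client.responses.create({
    // Assumed model id based on the naming in these posts.
    model: "codex-mini-latest",
    input: "Explain this TypeScript error and suggest a fix: TS2339 Property 'foo' does not exist on type 'Bar'.",
  });
  // output_text is the SDK's convenience accessor for the model's text output.
  console.log(response.output_text);
}

main().catch(console.error);
```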
Codex CLI keeps getting better. In the long run, I expect that "local" (e.g. Codex CLI) and "remote" (e.g. Codex) coding agents will come together — imagine their combination as a remote coworker who can also look over your shoulder. Excited for the future of programming!
we will name it better than chatgpt this time in case it takes off
"Every scientific era ends when the questions outgrow our unassisted minds—and every new era begins when we forge a tool that makes the impossible routine. AI is not replacing intelligence; it is the latest chapter in our long habit of manufacturing more of it." I had a long…
So excited to share Codex CLI with everyone - fully open-source and available to try with o3 + o4-mini today. Excited to hear what you think!
Meet Codex CLI—an open-source local coding agent that turns natural language into working code. Tell Codex CLI what to build, fix, or explain, then watch it bring your ideas to life.
We’re releasing BrowseComp, which stands for Browsing Competition. 🏎️ Think of it like coding or math competitions — while these contests may not perfectly reflect real-world SWE or mathematical research, they do capture a spark of intelligence. This is THE benchmark we should…
We’re open-sourcing BrowseComp (“Browsing Competition”), a new, challenging benchmark designed to test how well AI agents can browse the internet to find hard-to-locate information. It’s like an online scavenger hunt…but for browsing agents. openai.com/index/browseco…