NomoreID
@Hangsiin
AI/ML Developer
Two years ago, I realized that AI would have a genuinely significant and transformative impact, and I wondered how I could contribute by helping people understand this shift. Ultimately, within the limits of my own capabilities, I actively shared a lot of information online and…
The IMO Gold team's podcast is coming (next week!). It will surely be great!
And @alexwei_ and @SherylHsu02!
A new type of hallucination. (Still, compared to the past, it looks promising overall!)
Gemini has unlocked a new capability: conversational image segmentation 🖼️ This enables new use cases that were previously not possible, furthering Gemini’s SOTA image understanding capabilities! 🧵
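To make "conversational image segmentation" concrete, here is a minimal sketch of how such a request might look through the google-genai Python SDK. The model name, image path, prompt, and expected response fields are illustrative assumptions, not the exact setup from the announcement.

```python
# Minimal sketch: asking Gemini to segment objects described in natural language.
# Assumptions: the google-genai Python SDK is installed, an API key is available,
# and "gemini-2.5-flash" is used as a placeholder model name.
from google import genai
from PIL import Image

client = genai.Client(api_key="YOUR_API_KEY")
image = Image.open("kitchen.jpg")  # any local image

# A conversational segmentation request: instead of a fixed class list,
# the objects to segment are described in plain language.
response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=[
        image,
        "Segment the utensil being used to stir, and return each result "
        "as JSON with a label, a 2D bounding box, and a mask.",
    ],
)

# Expected (assumed) output: a JSON list of {label, box_2d, mask} entries
# that downstream code can parse and overlay on the image.
print(response.text)
```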
After a long time, I'm writing an interesting paper again. I hope it can be released soon.
🚀Introducing the Hierarchical Reasoning Model🧠🤖 Inspired by the brain's hierarchical processing, HRM delivers unprecedented reasoning power on complex tasks like ARC-AGI and expert-level Sudoku using just 1k examples, no pretraining or CoT! Unlock the next AI breakthrough with…
'No tools, no Lean' sounds great!
I cannot emphasize this enough: the system uses no tools, no Lean — text in, text out. And the more we scale inference compute, the more accurate the proofs get, while still reading like natural text.
I still remember the time when RL for open-ended problems was considered very difficult. It seems that, fundamentally, these challenges are now being resolved.
RL and search working on open-ended problems is incredibly exciting! In some sense, the IMO is just a teaser, but nevertheless a perfect testbed for these techniques. Very excited about how the frontier is going to be pushed in all the non-verifiable domains in the upcoming months.
In a significant advance over our results with AI last year, Gemini was given the same problem statements and time limit - 4.5 hours - as human competitors, and still produced rigorous mathematical proofs. It gained 35 points out of a total of 42 - equivalent to earning a gold…
This model is actually not only amazing at IMO/math, but also at other domains and tasks (stay tuned). The true frontier of AI is exciting! 😃