adi
@adonis_singh
17 • model behavior @lmstudio • contributor @_mcbench
"I use 4o, you?" "oh I just use qwen3-235b-a22b-thinking-2507"
gotta sleep early. tmr (oh i should say today) is qwen3-235b-a22b-thinking-2507 if everything goes well.
Code with AI, Review with AI, Fix all with AI. We’re all better together!
the only good open models with 'taste' have been deepseek v3 and kimi k2
qwen3-coder, running locally: I had it set up testing infra using minunit and gcov and write some tests on a small ~5000 loc C project. Did it all. 2-3 months ago I tried this with codex, jules, cursor, etc. They all struggled at various parts but eventually did ok. Obviously…
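(For anyone who hasn't seen it, minunit is famously just a couple of macros. Here's a minimal sketch of the kind of setup described, with a hypothetical clamp() standing in for the project's actual code; the real repo and tests aren't shown here.)

```c
#include <stdio.h>

/* minunit: the whole "framework" is essentially these two macros. */
#define mu_assert(message, test) do { if (!(test)) return message; } while (0)
#define mu_run_test(test) do { char *message = test(); tests_run++; \
                               if (message) return message; } while (0)

int tests_run = 0;

/* Hypothetical function standing in for the project's real code. */
static int clamp(int x, int lo, int hi) {
    return x < lo ? lo : (x > hi ? hi : x);
}

static char *test_clamp_low(void)  { mu_assert("clamp below range", clamp(-5, 0, 10) == 0);  return 0; }
static char *test_clamp_high(void) { mu_assert("clamp above range", clamp(99, 0, 10) == 10); return 0; }

static char *all_tests(void) {
    mu_run_test(test_clamp_low);
    mu_run_test(test_clamp_high);
    return 0;
}

/* build and check coverage, roughly what the gcov step looks like:
 *   gcc --coverage mu_test.c -o mu_test && ./mu_test && gcov mu_test.c
 */
int main(void) {
    char *result = all_tests();
    if (result) printf("FAIL: %s\n", result);
    else        printf("ALL TESTS PASSED\n");
    printf("Tests run: %d\n", tests_run);
    return result != 0;
}
```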
my chatgpt agent opened infinite copies of itself am i in trouble?

kimi k2 (left) vs qwen3 coder (right)! prompt "the solar system scaled to fit inside a minecraft world"
extremely excited! we're likely about to get sota OS code gen tonight
not small tonight
chatgpt agent definitely feels like it's using a smarter model. the operator inside also seems more accurate with clicks
god this naming is horrible. model seems pretty good though
Bye Qwen3-235B-A22B, hello Qwen3-235B-A22B-2507! After talking with the community and thinking it through, we decided to stop using hybrid thinking mode. Instead, we’ll train Instruct and Thinking models separately so we can get the best quality possible. Today, we’re releasing…
does this mean we're going back to the gpt-n? LET'S GOO
Heard GPT-5 is imminent, from a little bird.
- It’s not one model, but multiple models. It has a router that switches between reasoning, non-reasoning, and tool-using models.
- That’s why Sam said they’d “fix model naming”: prompts will just auto-route to the right model.
-…
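(If the rumor pans out, the dispatch layer is easy to picture. A toy sketch in C, with invented handler names and a naive keyword heuristic; purely illustrative, nothing here reflects any real OpenAI routing logic.)

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical backends; in reality each would be a call to a different model. */
static const char *reasoning_model(const char *p) { (void)p; return "[slow, chain-of-thought answer]"; }
static const char *fast_model(const char *p)      { (void)p; return "[quick, non-reasoning answer]"; }
static const char *tool_model(const char *p)      { (void)p; return "[tool-using agent trajectory]"; }

/* Toy router: pick a backend from surface features of the prompt.
 * A production router would presumably be a learned classifier,
 * not keyword matching. */
static const char *route(const char *prompt) {
    if (strstr(prompt, "browse") || strstr(prompt, "search"))
        return tool_model(prompt);      /* needs tools */
    if (strstr(prompt, "prove") || strstr(prompt, "step by step"))
        return reasoning_model(prompt); /* hard problem */
    return fast_model(prompt);          /* cheap default */
}

int main(void) {
    printf("%s\n", route("search the web for qwen3 benchmarks"));
    printf("%s\n", route("prove the inequality step by step"));
    printf("%s\n", route("write a haiku about MoE models"));
    return 0;
}
```

The user-facing upshot would be a single name with the router picking the backend, which is what would make the naming cleanup possible.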