cedric
@cedric_chee
SWE, Logorhythms | @fastdotai alumni, Independent LLM researcher | Code: https://github.com/cedrickchee | ex-entrepreneur @AntlerGlobal | genuinely curious
Can Grok 4 or Kimi K2 create a high-performance Minecraft torch mod with dynamic lighting in under a day? Gemini 2.5 Pro did it in days. It's unbelievable! Let's find out.
On day 4, after 6 attempts, I successfully built a Minecraft torch mod and a nice cube world. Real Minecraft doesn't have light from held torches. Insane. I'm genuinely amazed by Gemini 2.5 Pro's excellence.
New models on the @lmarena_ai WebDev arena:
- Lobster
- Nectarine
- Starfish (not in this video)
In the video, compared to the 'Anonymous Chatbot' (aka o3-Alpha) from 17th July. Observations:
- Lobster is closest to o3-Alpha, but nowhere near as good
- Nectarine was…
Qwen's most advanced reasoning model matches Gemini 2.5 Pro and o3 on benchmarks. It solved the Fantastic Four puzzle in 10 minutes with an unprecedented 81,920 thinking tokens!
🚀 We’re excited to introduce Qwen3-235B-A22B-Thinking-2507 — our most advanced reasoning model yet! Over the past 3 months, we’ve significantly scaled and enhanced the thinking capability of Qwen3, achieving:
✅ Improved performance in logical reasoning, math, science & coding…
Qwen3-Coder outperforms Claude Sonnet 4 in 90% of cases.
Let's compare Qwen 3 Coder & Sonnet 4 for code generation:
I was using Claude Code wrong... Here’s what I learnt and how I maximise Claude Code performance + the best tips that are ACTUALLY useful 👇 Thread below
[new YouTube video] Writing Redis VRANGE using AI: how I prepare the context, and how the human in the loop is crucial to obtain good results: youtube.com/watch?v=3Z7T0D…
New Qwen3 reasoning model tomorrow hopefully 🙌
gotta sleep early. tmr (oh i should say today) is qwen3-235b-a22b-thinking-2507 if everything goes well.
Anthropic just 10x'd the rate limits for Tier 2 and up. Tier 2 is the one you get when you've bought at least $40 of credits. The previous rate limit for Sonnet 4 was 40,000 tokens per minute; the new limit is 450,000 tokens per minute! (The Tier 2 requests-per-minute limit stays at 1,000.)
We've increased Tier 1-4 rate limits for Claude Opus 4 on the Anthropic API:
Tier 1: 20K → 30K ITPM, 8K OTPM
Tier 2: 40K → 450K ITPM, 90K OTPM
Tier 3: 80K → 800K ITPM, 160K OTPM
Tier 4: 200K → 2M ITPM, 400K OTPM
I’m Shawn, founder of Memories.ai, former researcher at Meta and CS PhD at the University of Cambridge. Today we’re launching Memories.ai: we built the world’s first Large Visual Memory Model, to give AI human-like visual memories. Why visual memory? AI to…
Open the sub agents interface by running the /agents command. Sub agents are pre-configured personas that Claude Code can delegate tasks to. Key benefits:
- a sub agent operates in its own context
- can be fine-tuned with detailed instructions for specific domains
- reuse across…
Claude Code now lets you create teams of custom agents. We'd love to hear what you build.
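For context on what a sub agent looks like on disk: Claude Code reads them as Markdown files with YAML frontmatter from `.claude/agents/` (project-level) or `~/.claude/agents/` (user-level). A minimal sketch — the `code-reviewer` persona and its prompt below are made-up examples, not an official template:

```markdown
---
name: code-reviewer
description: Reviews recent code changes for bugs, style, and security issues. Use proactively after edits.
tools: Read, Grep, Glob
---

You are a senior code reviewer. When invoked:
1. Inspect the recently changed files and flag bugs, security issues, and style problems.
2. Report findings grouped by severity, with concrete fix suggestions.
```

The frontmatter names the agent and (optionally) restricts which tools it may use; the body becomes the sub agent's system prompt, which is how it gets the detailed, domain-specific instructions mentioned above.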
🚀 What takeoff looks like. Gemini is gaining ground.
Many of you have been saying it, and it's true: @GeminiApp has momentum and it's growing! We've got tons in the pipeline and lots more to do!
Researchers found that fine-tuning a model on outputs from another model can transfer "dark knowledge". It's a potential way to detect if a model was truly trained from scratch or built on top of existing weights.
In a joint paper with @OwainEvans_UK as part of the Anthropic Fellows Program, we study a surprising phenomenon: subliminal learning. Language models can transmit their traits to other models, even in what appears to be meaningless data. x.com/OwainEvans_UK/…