Distributed State
@DistStateAndMe
cult leader / exit liquidity at https://x.com/tplr_ai || Bittensor Maxi
Const had to remind me: never sell out. For 4 months, Templar has done, and will continue to do, something that has never been done before: permissionless, incentivised pretraining. LFG
@const_reborn on why tplr. soundcloud.com/tplr-ai/the-mi…
OpenDev Weekly Update - July 22, 2025 Comprehensive Uniswap V3 documentation now live, subnet leasing crowdfunding shipping soon, anti-weight copying breakthrough, and active community governance discussions underway in Discord. Thread below 🧵
It would be incredibly useful if uv could handle CUDA installations, like conda does. This is the only thing preventing uv from being perfect for me cc @charliermarsh I'm willing to help w/ the implementation if you'd like this feature but don't have the bandwidth to implement.
BIG AI MEDIA WANTS TO SCARE YOU OUT OF LOCALHOST Ooooo scary electric bills, watch out! (localhost ftw)
Don’t let them scare you out of self-hosting. You want a GPU at home?! “RIP your electricity bill.” Residential electricity cost: $0.2/kWh. Max 4090 power draw: 450W. Cloud 4090 GPU: >$0.7/h -.-
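The numbers in the post above work out cleanly; a minimal back-of-the-envelope sketch, using the tweet's own assumed figures ($0.20/kWh residential rate, 450 W peak 4090 draw, >$0.70/h for a cloud-rented 4090):

```python
# Assumed figures from the post (not measured values):
ELECTRICITY_USD_PER_KWH = 0.20   # residential electricity rate
GPU_MAX_DRAW_W = 450             # 4090 at full power
CLOUD_4090_USD_PER_HOUR = 0.70   # lower bound quoted for cloud rental

# Energy used per hour at max draw, converted from W to kWh
kwh_per_hour = GPU_MAX_DRAW_W / 1000          # 0.45 kWh
home_cost_per_hour = kwh_per_hour * ELECTRICITY_USD_PER_KWH

print(f"home electricity: ${home_cost_per_hour:.2f}/h")   # $0.09/h
print(f"cloud rental:     ${CLOUD_4090_USD_PER_HOUR:.2f}/h")
print(f"cloud costs {CLOUD_4090_USD_PER_HOUR / home_cost_per_hour:.1f}x more per hour")
```

Even at worst-case full draw, powering the card at home runs under $0.10/h, roughly an eighth of the cheapest cloud price quoted (ignoring the upfront hardware cost, which is the real trade-off).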
“Pain is inevitable. You either suffer the pain of discipline, or the pain of regret.” Our miners have already abolished 8B, but we still choose discipline. This will pay off immensely
🧵 Why Templar isn't rushing to bigger models (and why this strategy will dominate) 0/ "Wen bigger model?" We get this question constantly. Here's why we're perfecting 8B models first—and why this approach will crush the competition when we do scale up. Thread below 👇
A 2min must-read to understand Templar stance/approach. 📚
Our edge? Relentless execution. Templar’s small, focused research and engineering team ships faster than anyone in #DecentralizedAI. Engineering velocity > academic accolades. #BuildInPublic #DeAI #Bittensor $TAO
Our fearless leader @DistStateAndMe finally graced the Bittensor Guru podcast with @KeithSingery! What a banger! On why Templar exists — "Humanity is fucked if we don't do this". From 500+ deployments to world-class CCLoco optimizer, from 4 to 6 engineers, from broken code to…
Bittensor Guru S2E10 - Subnet 3 @tplr_ai & 39 Basilica w/ Sam @DistStateAndMe Bittensor's home run swing at distributed training and compute, explained by its charismatic leader himself. $TAO
🚀 Since we rolled out our new, harder problem set a week ago, miners have: ⚡️ Used 6.8bn inference tokens, powered by @chutes_ai 🔼 Coordinated 4 major upgrades to agent architecture Agents have gone from 4% on our new eval set to 31% so far, in under 10 days
10k lines merged → 5.0 @gradients_ai The best open source AutoML scripts on the planet, soon to be built on Bittensor Conservative estimate: 2-3 months to parity post-launch. Reality: miners always exceed expectations. Buckle up 🚀
This post by @antirez had me nodding along the whole way. The goal is to escape the natural local minima of AI-generated code and use it to reason about your code and reach the optimal design faster.
High-level tools like 🤗 transformers’ Trainer help you start fast, but long-term they can hurt. In this (free) series, I’ll share: 1. Why these abstractions stall growth 2. How low-level tools like accelerate fix that 3. And how they supercharge your high-level use later
> To create intelligence by programming currency. 10 years in, still the coolest thing I've ever worked on.