Austin Tackaberry
@AETackaberry
Founder ShareCal | prev @databricks @uber
can't believe it, AI is helping me fulfill my dream of reviewing code all day
The year is 2035. CAPTCHA v17 just launched. To access this site you must fly to our HQ in SF and complete our advanced level hopscotch course 3 times.
Ironically, I feel the coding vibes way more when I'm not vibe coding
Coding with AI is constantly wondering whether the models are getting worse or if my expectations are getting higher. Cursor Tab feels so dumb rn
im hearing that when openai researchers first maxxed out thinking tokens in o3-pro, they started calling it "thiel-mode"
it took god 2.5 million years to go from homo erectus to homo sapiens and it took openai only 2 years to go from gpt4 to o3, yet im the crazy one for praying to a church full of gpus
I'm sick and tired of these authoritarians telling me what to do. I thought this was America

THIS JUST IN: IN A FIRST-EVER SEVEN-WAY STARTUP SALE MERE DAYS BEFORE THE FTC-CRACKDOWN DEADLINE:
1. TRUELL IS TAKING HIS TALENTS TO MISSION BAY, JOINING OPENAI
2. SUALEH WILL JOIN ALEXANDR AT "GOD-ALL-KNOWING," FORMERLY KNOWN AS "META," FORMERLY KNOWN AS "FACEBOOK"
3. ARVID AND…
BREAKING: META TO TRADE YANN AND 4 FRPs FOR SCHULMAN AND SHOLTOBRICKEN IN A THREE-WAY TRADE
SCHULMAN'S CONTRACT IS EXPIRING AT THE END OF THE YEAR AND HE EXPECTS TO SIGN A 3YR $200MM DEAL WITH META
KARPATHY SAYS HE IS "HAPPY WHERE HE IS AT" BUT CEOS AROUND THE BAY EXPECT A SIGNING SOON
vercel 🤝 apple
my new favorite conspiracy: @rauchg kickback for all macos sales >24GB RAM
😔 that's hard @nextjs
We are in the "remote work" stage of AI. You have to constantly provide explicit context to the LLM, and it's exhausting. The next unlock in AI will be when LLMs can gather context through osmosis
when you hit tab but it doesn't tab-autocomplete, it does a tab-tab instead, so you tab to fix the tab-tab but now you have two tab-tabs
can someone please tell me why the @nextjs dev server eats RAM and like never stops eating RAM
when you make a 1-line change to update a model and your @braintrustdata evals improve accuracy and reduce latency by 50%

I want a @Spotify DJ mode for coding where the DJ is @ThePrimeagen randomly showing up between songs to roast me on how I'm using my IDE
Conventional wisdom in AI is "build now for where AI models will be in 6 months," but Cursor vs Devin shows that you should build for where AI is now, bc once you get PMF, nothing else matters
My favorite thing about @DevinAI is that they have figured out how to handle async communications. I can send as many messages as I want whenever I want without having to pause the agent. Also, the UI/UX for setting up the devcontainer is great
So far, Devin > Cursor in Slack