Srini Pagidyala
@Srini_Pa
Building Cognitive AI to Unlock Real Intelligence | Co-Founder http://Aigo.ai | AGI Missionary
can it learn continuously? adapt autonomously? update its internal model in real-time? - cascading changes across its beliefs, behavior, and understanding. if the answer is ‘no’, it’s just another bloated LLM - no path to real intelligence, same architectural limitations.
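to make that test concrete, here's a toy sketch (invented example, no real architecture implied) of the gap between a frozen model and one that updates its internal model as data arrives:

```python
# Toy contrast: a frozen model vs. an online learner that incorporates
# every new observation immediately. Hypothetical numbers throughout.
class FrozenModel:
    def __init__(self, weight: float):
        self.weight = weight          # fixed at "training time"

    def predict(self, x: float) -> float:
        return self.weight * x

    def observe(self, x: float, y: float) -> None:
        pass                          # no mechanism to incorporate new data

class OnlineLearner(FrozenModel):
    def observe(self, x: float, y: float, lr: float = 0.1) -> None:
        # one gradient step on squared error: the internal model shifts
        # right away, and every later prediction reflects the change
        error = self.predict(x) - y
        self.weight -= lr * error * x

frozen, online = FrozenModel(1.0), OnlineLearner(1.0)
for _ in range(20):                   # the world now says y = 3x
    frozen.observe(2.0, 6.0)
    online.observe(2.0, 6.0)
print(frozen.predict(2.0), online.predict(2.0))  # 2.0 vs ~6.0
```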
🚨 BREAKING: OpenAI is dropping GPT-5 next week!
—1M token input window, 100k output tokens
—MCP support, parallel tool calls
—Dynamic short + long reasoning
—Uses Code Interpreter and other tools
Codenames are: o3-alpha > nectarine (GPT-5) > lobster (mini) > starfish (nano)
“Do AI Models Help Produce Verified Bug Fixes?” A lot less than you might think: mostly helping beginners, sometimes sending people off on tangents. In many cases AI debugging actually makes things worse. New study, link below:
We have 2 modes of thinking:
System 1: Fast, automatic & emotional
System 2: Slow, logical & rational
The shocker? System 1 controls 95% of our daily choices.
You think you're thinking—but you're reacting.
That's where it gets dangerous...
5/ SUNK COST FALLACY
You:
• Finish a terrible book
• Keep stuff you never use
• Stay in a job you despise
Why? Because you already "invested" in it.
But here's the truth: the only thing that matters is what it’s worth now.
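the decision rule, as a toy sketch with hypothetical numbers: compare only future value against future cost; what you already spent never enters the comparison.

```python
# Minimal sketch of the sunk-cost principle: a rational decision
# weighs only *future* value against *future* cost.
def should_continue(future_value: float, future_cost: float,
                    sunk_cost: float) -> bool:
    # sunk_cost is deliberately ignored -- it is gone either way
    _ = sunk_cost
    return future_value > future_cost

# 200 hours already "invested" changes nothing about the answer:
print(should_continue(future_value=10, future_cost=50, sunk_cost=200))  # False
```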
🚨AI doesn’t “know.” It performs knowing.
Here’s how synthetic epistemology works...
🔴Coherence > Truth
🔴Fluency ≠ Depth
🔴Stateless memory
🔴Probabilistic guessing
🔴Aesthetic over accuracy
🔴Plastic, never committed
Oh, and when form outshines function, truth gets…
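a toy illustration of the "probabilistic guessing" point, with invented numbers: the model samples from a distribution over continuations, and plausible falsehoods carry probability mass too.

```python
import random

# Hypothetical next-token distribution for "The capital of France is".
# The numbers are made up; the point is that fluent, coherent
# continuations get sampled whether or not they are true.
next_token_probs = {
    "Paris": 0.80,   # true
    "Lyon":  0.12,   # false but plausible -- still sampleable
    "Rome":  0.08,   # false but fluent
}
tokens, probs = zip(*next_token_probs.items())
sample = random.choices(tokens, weights=probs, k=1)[0]
print(sample)  # usually "Paris", sometimes a confident falsehood
```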
Any pre-seed or seed investor that turns down the chance to meet a founder cos they don’t like “the space” they are in doesn’t get this business. Markets change, products change, people remain. At this stage, the only thing that matters is people. 100%.
Hallucination is baked into LLMs. Can't be eliminated, it's how they work. @DarioAmodei says LLMs hallucinate less than humans. But it's not about less or more. It's the differing & dangerous nature of the hallucination, making it unlikely LLMs will cause mass unemployment (1/n)
The answer is not to frankenstein LLMs but to start with the right approach: Cognitive AI -- to learn and think like humans do. petervoss.substack.com/p/agi-from-fir…
We actually do - just need to scale now petervoss.substack.com/p/insa-integra…
completely agree with @DrJimFan - and think that the problem is far more general. outsiders (and sometimes insiders) don’t know the difference between easy problems and hard problems – and often wildly overextrapolate from the easy problems to the hard problems. every short…
I'm observing a mini Moravec's paradox within robotics: gymnastics that are difficult for humans are much easier for robots than "unsexy" tasks like cooking, cleaning, and assembling. It leads to a cognitive dissonance for people outside the field, "so, robots can parkour &…
I can't get over the fact that so many engineers still don't grok the fundamentals of what an LLM is. Repeat after me: it's just pattern matching, it doesn't "know" anything
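a minimal sketch of the point: even the crudest pattern matcher "completes" text the same basic way, by frequency, not knowledge. (Toy bigram model over an invented corpus; real LLMs are vastly more sophisticated, but the mechanic is what the tweet is about.)

```python
from collections import Counter, defaultdict

# Count bigram continuations observed in a tiny corpus.
corpus = "the cat sat on the mat the cat ate the fish".split()
bigrams: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def complete(word: str) -> str:
    # return the most frequent observed continuation of `word`
    return bigrams[word].most_common(1)[0][0]

print(complete("the"))  # "cat" -- not knowledge, just frequency
```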
🚨No, LLMs aren't immortal, they are atemporal.
A neural network is not a “being”, @geoffreyhinton, any more than a Tetris app on your phone is. (Reinstall Tetris and it comes back to life!)
Geoffrey Hinton says LLMs are immortal. Destroy the hardware, save the weights. Rebuild them later, and “the very same being comes back to life.” But humans don't work that way. We're bound to our substrate. Not even Kurzweil gets a backup.
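what "save the weights, rebuild later" amounts to, as a sketch (assuming PyTorch; the assert at the end is the point, not the library):

```python
import torch

# "Destroy the hardware, save the weights, rebuild later": the revived
# instance is the same function, bit for bit. Whether that makes it
# "the very same being" is exactly the dispute.
model = torch.nn.Linear(4, 2)
x = torch.randn(1, 4)
before = model(x)

weights = model.state_dict()      # "save the weights"
del model                         # "destroy the hardware"

revived = torch.nn.Linear(4, 2)   # "rebuild them later"
revived.load_state_dict(weights)
assert torch.equal(revived(x), before)  # identical behavior restored
```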
Anthropic could be bankrupted within the next few months, thanks to last week's barely covered legal ruling, which exposes the AI startup to billions, potentially hundreds of billions, in damages for its use of pirated, copyright-protected works.
"smart capital is waking up to course-correct." Yes, yes they are.
so why pour trillion$ into LLMs that can’t evolve, when cognitive AI delivers real intelligence at a tiny fraction of the cost with 1m x less data & compute and zero retraining? smart capital is waking up to course-correct. 🧵
cognitive AI decoded: srinipagidyala.substack.com/p/the-right-wa…
when hype fades, capital dries, illusion collapses, they’ll say, “no one saw it coming.” we did. we still do. and we’re building what really matters: cognitive AI.
wrong paradigms don't scale. LLMs = 100m candles trying to invent a lightbulb. $3T on GPUs, while ignoring the architecture’s fatal flaw? insanity.
and cognitive AI does it at a tiny fraction of the cost, but that’s the problem. cognitive AI doesn’t feed the hype cycle. it just delivers what matters: real intelligence, AGI.