Benjamin Riley
@benjaminjriley
Founder of Cognitive Resonance, a new venture to improve understanding of human cognition and generative AI.
Today, Cognitive Resonance is releasing a new guide titled Education Hazards of Generative AI. This free resource clarifies common misconceptions about how AI works and warns against misuses of this technology in education. Please share this widely. cognitiveresonance.net/resources.html

Hey, anything going on around here today? It's been a while, nice to see everyone.
I enjoy listening to Hard Fork 🚩🚩🚩
When they call it "gen AI" 🚩🚩🚩
Texas Friends: I am organizing the Stand Up for Science rally taking place this Friday (3/7) at 4p at the State Capitol in Austin. We will be advocating for science and democracy, inquiry and tolerance, and opposing what's happening at the federal level. eventbrite.com/e/stand-up-for…
As we all know, Texans absolutely LOVE it when the state government takes its marching orders from DC. What a crock of shit.
There's a war against knowledge happening in America -- which side are you on? It's time to take to the streets. standupforscience2025.org

Is this a fallacy or a strawman? What seems to be missing in this debate is that LLMs *predictably* struggle at deterministic tasks of the same level of difficulty. The embers of autoregression still burn today. (I nod toward you @RTomMcCoy!)
I agree with @AmandaAskell. It is a fallacy. It can be appropriate to say both a) that LLMs are next-token predictors, at a mechanistic level, and b) that they have understanding, at another level. (a) is an engineering fact. But the negation of (b) does not follow. 1/2
The attorney for the plaintiffs says this case will continue, to which I say, good luck with that. What a waste of teachers' time this has been.
Judge Rebuffs Family’s Bid to Change Grade in #AI Cheating Case - my latest for @The74 the74million.org/article/judge-… @benjaminjriley
I just wrote a quick essay about student plagiarism and AI that prompted the comment "this is about as pure a case of 'f around and find out' as an educator could ask for." Curious? L i n k i n B i o...and man I can't wait to be rid of this platform.

Here's what @OpenAI thinks an "interactive quiz" for 10th graders on the Mexican Revolution should consist of. This is literally using the model prompt they suggest in their new AI course for educators. ¡Viva la ed-tech revolución!

My first viral, uh, "skeet" over on the other place involves Jordan Peterson suggesting Jiminy Cricket is Jesus Christ. My point is, it's more fun over there now. Come find me and say hi -- same handle as the one here.
Great to see the incoming Trump Administration nominate yet another Secretary of Education with a long track record of leadership toward improving educational outcomes.
Linda McMahon to become the first Secretary of Education to have been a playable character in WWF No Mercy for the Nintendo 64.
What if today's large language models are (basically) as good as they're going to get? I have a new essay exploring the growing awareness that scale is NOT all we need when it comes to creating artificial general intelligence. Link in bio.

This thoughtful essay from Jason Farago (@jsf) looks to our artistic past regarding automatons so as to understand our present obsession with AI chatbots. nytimes.com/2024/11/18/art…
More evidence of LLMs as digital stone soup. Remember, we were told scale is all we need. I've got the receipts!
Who leaked this to The Information? ;)
Really great thread exploring how the impressive leap to GPT-4 has not led to further impressive leaps in AI capability. If future improvements are marginal, what then?
1. Orion reportedly doesn't improve across a number of tasks, Andreessen says there's now a "ceiling" (don't call it a wall!), and Sutskever says the old scaling laws are done. Models are converging to ~GPT-4 levels of intelligence.