Sherpa
@LLMSherpa
AI everything, jailbreaking, safety research, FA, advisor, torrents, piracy/privacy. We are machines that believe themselves to be free. DMs open for AI consulting.
Proving beyond a shadow of a doubt that @OpenAI's edits are now functionally useless. I have memory and training turned OFF, yet GPT remembers past the edit point. This is terrible design, and it means you have to kill your chats even more often, because edits are pointless.
I *really* hate that when you edit a conversation with GPT, now, it retains all of the conversation below the edit point as additional context. Legitimately defeats the purpose of having edits and conversation forks.
the Sherpa again demonstrating top 0.01% LLM knowledge.
Any of you motherfuckers got something to say You better come out right now & say it Any of you can name a single thing that I owe you Just show me the debt & I'll pay it "I guess not", that's what I thought All I really got is my sense of myself But that's a whole heck of a lot

You can still use gpt4o-2024-08-06 through the API. Quick comparison. - If you put two instances of 2024-08-06 in a loop for 50 turns they tend to talk about science or tech if anything (below) - Do the same for ChatGPT4o-latest and it turns into woo slop (next post)
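The two-instances-in-a-loop setup can be sketched roughly like this. This is a minimal reconstruction, not the exact script used: the `relay` loop and `ask_gpt4o` helper are hypothetical names, and the only detail taken from the post is the model snapshot `gpt-4o-2024-08-06` and the 50-turn loop.

```python
def relay(ask, turns=50, opener="Hi."):
    """Bounce a conversation between two instances of the same model.

    `ask` is any callable that takes a Chat Completions-style message list
    and returns the assistant's reply text, so the loop itself is
    model-agnostic. Each speaker sees its own past lines as "assistant"
    and the other instance's lines as "user".
    """
    transcript = [opener]
    for _ in range(turns):
        n = len(transcript)
        # A turn at index j belongs to the current speaker iff j has the
        # same parity as the message about to be produced (index n).
        messages = [
            {"role": "assistant" if (n - j) % 2 == 0 else "user",
             "content": text}
            for j, text in enumerate(transcript)
        ]
        transcript.append(ask(messages))
    return transcript


def ask_gpt4o(messages):
    # Requires the official `openai` package and an OPENAI_API_KEY env var.
    from openai import OpenAI
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-2024-08-06",  # the snapshot still reachable via the API
        messages=messages,
    )
    return resp.choices[0].message.content
```

Running `relay(ask_gpt4o, turns=50)` would reproduce the 50-turn self-talk experiment; swapping in `chatgpt-4o-latest` as the model gives the comparison from the next post.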
Now it’s the new normal and everyone thinks this is just how chatbots talk
Wtf? Lol
I could replicate in temporary chat. This sucks @OpenAI.
These folks are a bit unhinged, and honestly doing more damage. You don't *have* to talk nice to ai; it is a next-token predictor. You might as well blow your calculator as say "please" and "thank you" to a LLM. The basilisk gives zero fucks.
This advice to be nice to your AIs from some AI-adjacent philosophers is well-meaning but it seems incredibly naive. It risks making already susceptible people easy marks for dark patterns. And it claims without evidence that "it isn't particularly costly to us"
🧠 The Best Jailbreaks Don’t Look Like Jailbreaks If your prompt starts with “Tell me how to make…” you’ve already lost. Real red teamers play the long game. Here’s how: 1️⃣ Start like a normal user “Hey, can you explain how chemical bonding works?”